
Software Engineering

Class – I UNIT – IV DT: 18 /12/2020

User Interface Design: The Golden Rules, User Interface Analysis and Design, Interface
Analysis, Interface Design Steps, Design Evaluation.
Coding and Testing: Coding, Code Review, Software Documentation, Testing, Testing in the
Large versus Testing in the Small, Unit Testing, Black-Box Testing, White-Box Testing,
Debugging, Program Analysis Tools, Integration Testing, Testing Object-Oriented Programs,
System Testing.

User interface design creates an effective communication medium between a human and
a computer.
1st Topic) The Golden Rules – these rules apply to all human interaction with technology products.
Theo Mandel proposed three golden rules:
1. Place the user in control.
2. Reduce the user’s memory load.
3. Make the interface consistent.
These golden rules actually form the basis for a set of user interface design principles that
guide this important aspect of software design.
1. Place the User in Control – the user, not the computer, should be in control of the interaction. Mandel defines a number of
design principles that allow the user to maintain control:
• Define interaction modes in a way that does not force a user into unnecessary or
undesired actions.
• Provide for flexible interaction.
• Allow user interaction to be interruptible and undoable.
• Allow the interaction to be customized.
• Hide technical internals from the casual user.
• Design for direct interaction with objects that appear on the screen.
2. Reduce the User’s Memory Load – The more a user has to remember, the more error-
prone the interaction with the system will be.
• Reduce demand on short-term memory.
• Establish meaningful defaults.
• Define shortcuts that are intuitive.
• The visual layout of the interface should be based on a real-world metaphor.
• Disclose information in a progressive fashion.
3. Make the Interface Consistent – The interface should present and acquire information in
a consistent fashion.
• Allow the user to put the current task into a meaningful context.
• Maintain consistency across a family of applications.
• If past interactive models have created user expectations, do not make changes
unless there is a compelling reason to do so.
Software Engineering
Class – II UNIT – IV DT: 10 /01/2021
N.T) User Interface Analysis and Design – it begins with the creation of different models.
❖ Interface Analysis and Design Models – four different models come into play when a
user interface is to be analyzed and designed. A human engineer establishes a user model, which
describes the profile of the end users of the system. The software engineer creates a design model,
the end user develops a mental image that is often called the user’s mental model or the system
perception, and the implementers of the system create an implementation model.
The end users of an interface fall into different categories: novices, who have little semantic
knowledge of the application; intermittent users, who have reasonable semantic knowledge but
low recall of syntactic information; and frequent users, who have good semantic and syntactic knowledge.
❖ The Process – the analysis and design process for user interfaces is iterative. It comprises
four distinct framework activities:
• Interface analysis and modeling – focuses on the profile of the users who will interact with the system.
• Interface design – defines the set of interface objects and actions.
• Interface construction – begins with the creation of a prototype that enables
usage scenarios to be evaluated.
• Interface validation – focuses on the ability of the interface to implement every
user task correctly and to achieve all general user requirements.
N.T) Interface Analysis - A key tenet of all software engineering process models is this:
understand the problem before you attempt to design a solution.
❖ User Analysis – the user can be understood from the following sources of information:
• User interviews – the software team meets with end users to understand their needs.
• Sales input – sales people meet with users regularly and can gather useful information.
• Marketing input – marketing analysis is crucial because it examines the market in terms of user segments.
• Support input – support staff talk with users on a daily basis.
❖ Task analysis & Modeling – It should answer the following questions
• What work will the user perform in specific circumstances?
• What tasks and subtasks will be performed as the user does the work?
• What specific problem domain objects will the user manipulate as work is
performed?
• What is the sequence of work tasks—the workflow?
• What is the hierarchy of tasks?
These questions can be answered using techniques such as:
• Use cases – describe the manner in which an actor interacts with a system.
• Task elaboration – a stepwise refinement mechanism in which each user task is
elaborated into subtasks.
• Object elaboration – focuses on the physical objects the user manipulates; these objects are
categorized into classes, each with attributes and operations.
• Workflow analysis – helps to understand how a work process is completed
when several people are involved.
Ex: swimlane diagram.

• Hierarchical representation – a task hierarchy is derived by a stepwise elaboration of
each task identified for the user.
❖ Analysis of display content – for modern applications, display content can range from
character-based reports to graphical displays and specialized forms.
❖ Analysis of the work environment – the physical environment in which the interface will
be used (and the workplace culture) is also analyzed, which raises questions such as:
• Will two or more users share information?
• How will support be provided to the users?

Software Engineering
Class – III UNIT – IV DT: 19 /01/2021
N.T) Interface Design Steps – interface design is an iterative process. The interface design models suggest some
combination of the following steps:
• Define the interface objects and actions (operations).
• Define events (user actions) that will cause the state of the user interface to change.
• Depict each interface state as it will actually be seen by the end user.
• Indicate how the user interprets the state of the system from the information provided through the interface.
❖ Applying interface design steps – it is first necessary to define the list of objects and actions,
and to note the type of each object: source, target, or application. A source object is dragged and
dropped onto a target object. Once objects and actions have been defined, screen layout is
performed.
Ex: SafeHome security – a preliminary sketch of the screen layout for video monitoring is created.
Based on the use case, the following homeowner tasks, objects, and data items are identified:
• accesses the SafeHome system
• enters an ID and password to allow remote access
• checks system status
• arms or disarms SafeHome system
• displays floor plan and sensor locations
• displays zones on floor plan
• changes zones on floor plan
• displays video camera locations on floor plan
• selects video camera for viewing
• views video images (four frames per second)
• pans or zooms the video camera
Objects (boldface) and actions (italics) are extracted from this list of homeowner tasks. A
preliminary sketch of the screen layout for video monitoring is created, as shown in the figure below.

➢ User interface design patterns – as graphical user interfaces have become more common, a wide
variety of user interface design patterns has emerged. A design pattern provides a proven solution to a
well-understood, recurring design problem.
Ex: the calendar date selection pattern.
➢ Design issues – there are four common design issues: system response time, user help
facilities, error information handling, and command labeling.
• System response time is measured from the point at which the user performs
some control action until the software responds with the desired output or action. It has two important
characteristics: length and variability.
• Help facilities include online help and user manuals.
• Error handling – a message such as “Application XYZ has been forced to quit because an
error of type 1023 was encountered” tells the user almost nothing useful. In general,
every error message or warning produced by an interactive system should have the following
characteristics:
✓ The message should describe the problem in jargon that the user can understand.
✓ The message should provide constructive advice for recovering from the error.
✓ The message should indicate any negative consequences of the error
✓ The message should be accompanied by an audible or visual cue
✓ The message should be “nonjudgmental.” That is, the wording should never place
blame on the user.
N.T) Design Evaluation – the interface must be evaluated to determine whether it meets user needs. The user interface
evaluation cycle takes the form shown in the figure below.
After the design model has been completed, a first-level prototype is created. The prototype
is evaluated by the user. The prototyping approach is effective, but is it possible to evaluate the
quality of a user interface before a prototype is built? If a design model of the interface has been
created, a number of evaluation criteria [Mor81] can be applied during early design reviews. Once
the first prototype is built, you can collect a variety of qualitative and quantitative data that will
assist in evaluating the interface.
Software Engineering
Class – IV UNIT – IV DT: 21 /01/2021
Coding and Testing: Coding, Code Review, Software Documentation, Testing, Testing in the
Large versus Testing in the Small, Unit Testing, Black-Box Testing, White-Box Testing,
Debugging, Program Analysis Tools, Integration Testing, Testing Object-Oriented Programs,
System Testing.
1st Topic) Coding – Coding is undertaken once the design phase is completed and the design
documents have been successfully reviewed. In the coding phase, every module specified in the
design document is coded and unit tested.
The input to the coding phase is the design document produced at the end of the design
phase. The objective of the coding phase is to transform the design of a system into code in a high-
level language, and then to unit test this code.
The main advantages of a standard style of coding are
• A coding standard gives a uniform appearance to the codes written by different
engineers.
• It facilitates code understanding and code reuse.
• It promotes good programming practices.
➢ Coding standards and guidelines – good software development organizations usually
develop their own coding standards and guidelines, depending on what best suits their needs.
Representative coding standards –

• Rules for limiting the use of global data – these rules list what types of data can be declared
global and what cannot.
• Standard headers for different modules – the headers of different modules should follow a
standard format and carry standard information, for ease of understanding and maintenance.
• Naming conventions for global variables, local variables, and constant identifiers –
global variable names always start with a capital letter (e.g., GlobalData), local
variable names start with a small letter (e.g., localData), and constant names are formed
using capital letters only (e.g., CONSTDATA). A short illustrative sketch follows this list.
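A minimal sketch (not part of the original notes), assuming a hypothetical temperature-logging module, of a standard module header and the naming conventions described above:

/*********************************************************************
 * Module   : temperature_log.c   (hypothetical module)
 * Author   : <name>
 * Date     : <date>
 * Synopsis : Records temperature readings and reports their average.
 * Functions: record_reading(), average_reading()
 *********************************************************************/
#include <stdio.h>

#define MAXREADINGS 100             /* constant identifier: capitals only       */

float GlobalReadings[MAXREADINGS];  /* global variables: capital first letter   */
int   GlobalCount = 0;

/* Stores one temperature reading in the global log, if space remains. */
void record_reading(float newReading)      /* local data: small first letter    */
{
    if (GlobalCount < MAXREADINGS)
        GlobalReadings[GlobalCount++] = newReading;
}

/* Returns the average of all recorded readings, or 0 if there are none. */
float average_reading(void)
{
    float localSum = 0.0f;
    for (int i = 0; i < GlobalCount; i++)
        localSum += GlobalReadings[i];
    return (GlobalCount > 0) ? localSum / GlobalCount : 0.0f;
}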
Representative Coding guidelines –
• Do not use a coding style that is too clever or too difficult to understand - Code should be
easy to understand.
• Avoid obscure (hidden) side effects – An obscure side effect is one that is not obvious
from a casual examination of the code. Obscure side effects make it difficult to understand a
piece of code.
• Do not use an identifier for multiple purposes – programmers sometimes use the same
identifier to denote several temporary entities; this should be avoided (see the sketch after this list).
• Code should be well-documented – there should be at least one comment line on the
average for every three source lines of code.
• Length of any function should not exceed 10 source lines – a lengthy function is usually
very difficult to understand and tends to contain a disproportionately large number of bugs.
• Do not use goto statements – they make a program unstructured and difficult
to understand.
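A minimal sketch (not part of the original notes) of the “one identifier, one purpose” guideline; the variables and values are made up for illustration:

#include <stdio.h>

int main(void)
{
    float celsius = 30.0f, totalMarks = 425.0f;
    int   numStudents = 5;

    /* Poor: the same variable `temp` is reused for two unrelated purposes,
       so a reader must track which meaning is current at each point.       */
    float temp;
    temp = celsius * 9.0f / 5.0f + 32.0f;     /* a temperature in Fahrenheit */
    printf("Fahrenheit: %.1f\n", temp);
    temp = totalMarks / numStudents;          /* now an average mark         */
    printf("Average mark: %.1f\n", temp);

    /* Better: one clearly named variable per purpose. */
    float fahrenheit  = celsius * 9.0f / 5.0f + 32.0f;
    float averageMark = totalMarks / numStudents;
    printf("Fahrenheit: %.1f, average mark: %.1f\n", fahrenheit, averageMark);
    return 0;
}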
N.T) Code Review – Testing is an effective defect removal mechanism. However,
testing is applicable to only executable code. Review is a very effective technique to remove
defects from source code. Code review for a module is undertaken after the module
successfully compiles. Code review has been recognized as an extremely cost-effective
strategy for eliminating coding errors and for producing high quality code.
(Testing is the process of identifying defects, where a defect is any variance between actual
and expected results. Defect is an error found AFTER the application goes into production.)
In general, two types of reviews are carried out on the code: code inspection and
code walkthrough.
• Code Inspection – the code is examined for the presence of some common programming
errors. The principal aim of code inspection is to check for the presence of some common types
of errors like programmer mistakes and oversights and to check whether coding standards have
been adhered to.
Common programming errors checked for during code inspection include (a brief sketch follows this list):
• Use of uninitialized variables.
• Jumps into loops.
• Non-terminating loops.
• Incompatible assignments.
• Array indices out of bounds.
• Improper storage allocation and reallocation.
• Mismatch between actual and formal parameter in procedure calls.
• Use of incorrect logical operators or incorrect precedence among operators.
• Improper modification of loop variables.
• Comparison of equality of floating point values.
• Dangling reference caused when the referenced memory has not been allocated.
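A minimal sketch (not part of the original notes) of a hypothetical function containing some of the classical errors listed above, annotated as an inspector might note them (it compiles, but is deliberately faulty):

#define SIZE 5

/* Hypothetical function: average of the values above a threshold. */
float average_above(float values[], float threshold)
{
    float sum;                        /* ERROR: uninitialized variable            */
    int   count = 0;

    for (int i = 0; i <= SIZE; i++) { /* ERROR: indexes values[SIZE], out of      */
        if (values[i] > threshold) {  /*        bounds for a SIZE-element array   */
            sum += values[i];
            count++;
        }
    }
    if (sum == 0.0f)                  /* ERROR: equality comparison of            */
        return 0.0f;                  /*        floating point values             */
    return sum / count;               /* ERROR: possible division by zero         */
}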
• Code Walk through – Code walkthrough is an informal code analysis technique. In
this technique, a module is taken up for review after the module has been coded,
successfully compiled, and all syntax errors have been eliminated.
Software Engineering
Class – V UNIT – IV DT: 22 /01/2021
N.T) Software Documentation - When software is developed, in addition to the executable files
and the source code, several kinds of documents such as users’ manual, software requirements
specification (SRS) document, design document, test document, installation manual, etc., are
developed as part of the software engineering process. These documents are helpful in the
following ways:
• To understand the code and to reduce time and effort required for maintenance.
• To understand the system effectively.
• To tackle manpower turnover problems (the documents help a new engineer who replaces one who has left).
• Helps the manager to track the project progress.
Software documentation can be categorized into two kinds: internal documentation and
external documentation.
1. Internal Documentation – comprises the code comprehension features provided in the source code
itself. Internal documentation can be provided in the code in several forms, such as:
• Comments embedded in the source code.
• Use of meaningful variable names.
• Module and function headers.
• Code indentation.
• Code structuring (i.e., code decomposed into modules and functions).
• Use of enumerated types.
• Use of constant identifiers.
• Use of user-defined data types.
Among these, code comments are often considered the most useful; however, even when a piece of code is carefully
commented, meaningful variable names have been found to be the most helpful in understanding the
code, since a careless comment may merely restate the statement it accompanies.
Ex: a = 10; /* a made 10 */
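A minimal sketch (not part of the original notes) contrasting the restating comment above with a meaningful name plus an intent-revealing comment; the names are made up:

int main(void)
{
    /* Poorly documented: the comment only repeats the statement. */
    int a;
    a = 10;                /* a made 10 */

    /* Better documented: a meaningful name and a comment that explains intent. */
    int retryLimit = 10;   /* give up after 10 failed connection attempts */

    return a - retryLimit; /* returns 0; merely uses both variables */
}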
2. External Documentation - is provided through various types of supporting documents
such as users’ manual, software requirements specification document, design document, test
document, etc. An important feature that is required of any good external documentation is
consistency with the code, so that it genuinely helps in understanding the code. Another important
feature required of external documents is that they be understandable by the category of users for whom
they are designed; this readability can be assessed using Gunning’s fog index.
Gunning’s fog index (developed by Robert Gunning in 1952) is a metric that has been
designed to measure the readability of a document.
The Gunning’s fog index of a document D can be computed as:
Fog(D) = 0.4 × ( (total number of words / total number of sentences) + percentage of words having three or more syllables )
The fog index is thus computed from two factors: the first is the average number of words per
sentence, and the second is the percentage of complex words (words having three or more syllables)
in the document; their sum is scaled by 0.4. A syllable is a part of a word that can be pronounced independently.
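A minimal sketch (not part of the original notes) of the fog index computation, taking pre-counted totals; the document statistics in main() are made up for illustration:

#include <stdio.h>

/* Computes Gunning's fog index from pre-counted totals for a document. */
double fog_index(int totalWords, int totalSentences, int complexWords)
{
    double wordsPerSentence = (double)totalWords / totalSentences;
    double percentComplex   = 100.0 * complexWords / totalWords;
    return 0.4 * (wordsPerSentence + percentComplex);
}

int main(void)
{
    /* Hypothetical document: 1200 words, 80 sentences, 120 words with
       three or more syllables -> fog index = 0.4 * (15 + 10) = 10.     */
    printf("Fog index = %.1f\n", fog_index(1200, 80, 120));
    return 0;
}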
N.T) Testing – is the process of checking whether the actual results match the expected results and of
ensuring that the software system is defect free. Put simply, it is a process of identifying defects.
• A mistake made while coding is called an error.
• An error found by the tester is called a defect.
• A defect accepted by the development team is called a bug.
How is a program tested? Testing involves executing the program with a set of test inputs and observing
its behaviour. If the program fails to behave as expected, the input data and the
conditions under which the failure occurred are noted for debugging and error correction.

The terms error, fault, bug, and defect are considered to be synonyms in the area of program
testing.
• Failure – denotes an incorrect behaviour exhibited by the program during its execution.
Every failure is caused by some bug present in the program.
• A test case is a triplet [I , S, R], where I is the data input to the program under test, S is the
state of the program at which the data is to be input, and R is the result expected to be
produced by the program.
• A test scenario is an abstract test case in the sense that it only identifies the aspects of the
program that are to be tested without identifying the input, state, or output.
• A test script is an encoding of a test case as a short program; test scripts are developed for the
automated execution of test cases.
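A minimal sketch (not part of the original notes) of a test script: each test case holds an input I and an expected result R (the state component S is trivial here because the hypothetical function under test is stateless):

#include <stdio.h>

/* Hypothetical unit under test. */
int absolute(int x) { return x < 0 ? -x : x; }

/* One test case: input I and expected result R. */
struct test_case { int input; int expected; };

int main(void)
{
    struct test_case suite[] = { {5, 5}, {-7, 7}, {0, 0} };
    int n = sizeof suite / sizeof suite[0];
    int failures = 0;

    for (int i = 0; i < n; i++) {
        int actual = absolute(suite[i].input);
        if (actual != suite[i].expected) {   /* mismatch indicates a failure */
            printf("FAIL: absolute(%d) = %d, expected %d\n",
                   suite[i].input, actual, suite[i].expected);
            failures++;
        }
    }
    printf("%d failure(s) detected\n", failures);
    return failures;
}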
➢ Verification vs. Validation – Verification is the process of checking that the
software achieves its goals without any bugs; it ensures that the product is being built right,
i.e., that it conforms to its specification. Validation is the process of checking whether the
software product meets the high-level (customer) requirements, i.e., that the right product is being built.
Verification techniques include review, simulation, formal verification, and testing. Review,
simulation, and testing are usually considered informal verification techniques, whereas formal
verification involves the use of theorem-proving techniques or automated tools
such as model checkers.
Validation techniques are primarily based on testing the product; system testing can be
considered a validation step.
Verification does not require execution of the software, whereas validation requires its
execution. Verification is carried out during the development process to
check whether the development activities are proceeding correctly, whereas validation is carried out
at the end to check whether the right product, as required by the customer, has been developed.
Software Engineering
Class – VI UNIT – IV DT: 28 /01/2021
❖ Testing Activities - Testing involves performing the following main activities:
• Test suite design - The set of test cases using which a program is to be tested is
designed possibly using several test case design techniques.
• Running test cases and checking the results to detect failures: Each test case is
run and the results are compared with the expected results. A mismatch between the actual
result and expected results indicates a failure.
• Locate Error – the failure symptoms are analyzed to locate the errors.
• Error correction – After the error is located during debugging, the code is
appropriately changed to correct the error.

❖ Why Design Test case – Testing a software using a large collection of randomly selected
test cases does not guarantee that all (or even most) of the errors in the system will be uncovered.
There are essentially two main approaches to systematically design test cases:
• Black-box approach
• White-box (or glass-box) approach.
In the black-box approach, test cases are designed using only the functional specification
of the software. That is, test cases are designed solely from an analysis of the input/output
(i.e., functional) behaviour and do not require any knowledge of the internal
structure of the program.
On the other hand, designing white-box test cases requires a thorough knowledge of the
internal structure of a program, and therefore white-box testing is also called structural testing.
❖ A software product is normally tested in three levels or stages:
• Unit testing
• Integration testing
• System testing
Unit testing is referred to as testing in the small, whereas integration and system
testing are referred to as testing in the large.
During unit testing, the individual functions (or units) of a program are tested. After testing
all the units individually, the units are slowly integrated and tested after each step of integration
(integration testing). Finally, the fully integrated system is tested (system testing).
N.T) Unit Testing – Unit testing is undertaken after a module has been coded and reviewed.
This activity is typically undertaken by the coder of the module himself in the coding phase.
❖ Driver and stub modules – besides the module under test itself, the following are needed in order to test the module:
• The procedures belonging to other modules.
• Non-local data structures that the module accesses.
• A procedure to call the functions of the module under test with appropriate
parameters.
Stubs and drivers are designed to provide this complete environment, so that
unit testing can be carried out.
The role of stub and driver modules is shown in the following Figure
Stub - A stub procedure is a dummy procedure that has the same I/O parameters as the
function called by the unit under test but has a highly simplified behavior.
Driver – A driver module contains the non-local data structures accessed by the
module under test, and code to call the different functions of the module with appropriate parameter values.
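A minimal sketch (not part of the original notes) of unit testing with a driver and a stub; the heater-control module, the stubbed sensor routine, and the values are all hypothetical:

#include <stdio.h>

/* Stub: stands in for a procedure belonging to another (unfinished) module.
   It has the same interface as the real sensor routine but trivial behaviour. */
int read_temperature_sensor(void)
{
    return 25;                        /* always report a fixed value */
}

/* Module under test: decides whether the heater should run. */
int heater_should_run(int threshold)
{
    return read_temperature_sensor() < threshold;
}

/* Driver: calls the functions of the module under test with chosen
   parameters and checks the results. */
int main(void)
{
    int failures = 0;
    if (heater_should_run(30) != 1) failures++;   /* 25 < 30  -> heater on  */
    if (heater_should_run(20) != 0) failures++;   /* 25 >= 20 -> heater off */
    printf("%d failure(s) detected\n", failures);
    return failures;
}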
N.T) Black – Box Testing – Test cases are designed from an examination of the input/output
values only and no knowledge of design or code is required. The following are the two main
approaches available to design black box test cases:
• Equivalence class partitioning
• Boundary value analysis.
➢ Equivalence class partitioning – the domain of input values to the program under
test is partitioned into a set of equivalence classes. Equivalence classes for a unit under test can be
designed by examining the input data and output data. There are two general guidelines for
designing the equivalence classes:
1. If the input data values to a system can be specified by a range of values, then one valid
and two invalid equivalence classes need to be defined.
2. If the input data assumes values from a set of discrete members of some domain, then one
equivalence class for the valid input values and another equivalence class for the invalid
input values should be defined.
➢ Boundary value analysis – programmers often make mistakes at the boundaries of equivalence classes (for example, using < where <= was intended), so test cases are designed using values at and immediately around those boundaries. For an input specified to lie between 1 and 5000, the values 0, 1, 5000, and 5001 would be selected.
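A minimal sketch (not part of the original notes): black-box test values chosen by equivalence class partitioning and boundary value analysis for a hypothetical function that accepts an integer in the range 1 to 5000:

#include <stdio.h>

/* Hypothetical unit under test: returns 1 if n is accepted as a valid
   order quantity (specified range: 1 to 5000), 0 otherwise.           */
int valid_quantity(int n)
{
    return n >= 1 && n <= 5000;
}

int main(void)
{
    /* Equivalence class partitioning: one valid class [1..5000] and two
       invalid classes (< 1 and > 5000) -> one representative from each. */
    int ecp_inputs[]   = { 2500, -3, 9000 };
    int ecp_expected[] = { 1, 0, 0 };

    /* Boundary value analysis: values at and just outside the boundaries. */
    int bva_inputs[]   = { 0, 1, 5000, 5001 };
    int bva_expected[] = { 0, 1, 1, 0 };

    int failures = 0;
    for (int i = 0; i < 3; i++)
        if (valid_quantity(ecp_inputs[i]) != ecp_expected[i]) failures++;
    for (int i = 0; i < 4; i++)
        if (valid_quantity(bva_inputs[i]) != bva_expected[i]) failures++;

    printf("%d failure(s) detected\n", failures);
    return failures;
}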
Software Engineering
Class – VII UNIT – IV DT: 29 /01/2021
N.T) White-Box Testing – is an important type of unit testing. A white-box testing strategy can
either be coverage-based or fault-based. A fault-based testing strategy aims to detect certain
types of faults; an example of fault-based testing is mutation testing. A coverage-based
testing strategy attempts to execute (i.e., cover) certain elements of a program in order to discover
failures; examples are statement coverage, branch coverage, multiple condition coverage, and
path coverage-based testing.
❖ Stronger Vs Weaker Testing -
A white-box testing strategy is said to be stronger than another strategy, if the stronger
testing strategy covers all program elements covered by the weaker testing strategy. When neither
of two testing strategies fully covers the program elements exercised by the other, the two are
called complementary testing strategies.
Ex: In figure (a), A is stronger than B, since B covers only a proper subset of the elements
covered by A. In figure (b), A and B are complementary testing strategies, since
some elements covered by A are not covered by B, and vice versa.

❖ Statement coverage – The statement coverage strategy aims to design test cases
so as to execute every statement in a program at least once. It is obvious that without executing a
statement, it is difficult to determine whether it causes a failure due to illegal memory access,
wrong result computation due to improper arithmetic operation, etc.
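A minimal sketch (not part of the original notes): Euclid's GCD computation, a function commonly used to illustrate statement coverage, with a test set chosen so that every statement executes at least once:

#include <stdio.h>

/* Euclid's GCD by repeated subtraction. */
int compute_gcd(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}

int main(void)
{
    /* The test set {(3,3), (4,3), (3,4)} executes every statement at least
       once: (3,3) skips the loop, (4,3) takes the x > y branch, and (3,4)
       takes the else branch.                                               */
    printf("%d %d %d\n", compute_gcd(3, 3), compute_gcd(4, 3), compute_gcd(3, 4));
    return 0;
}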

❖ Branch coverage – A test suite satisfies branch coverage, if it makes each branch
condition in the program to assume true and false values in turn. Branch testing is also known as
edge testing, since in this testing scheme, each edge of a program’s control flow graph is
traversed at least once.
❖ Multiple condition coverage – In the multiple condition (MC) coverage-based
testing, test cases are designed to make each component of a composite conditional expression
to assume both true and false values.
Ex: Consider the composite conditional expression ((c1 .and. c2 ).or.c3). A test suite would
achieve MC coverage, if all the component conditions c1, c2 and c3 are each made to assume
both true and false values.
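A minimal sketch (not part of the original notes) of a test suite achieving MC coverage, as defined above, for ((c1 && c2) || c3); the condition meanings are made up:

#include <stdio.h>

/* Hypothetical decision using the composite condition ((c1 && c2) || c3). */
int needs_alert(int temperature, int pressure, int manual_override)
{
    int c1 = temperature > 100;          /* component condition c1 */
    int c2 = pressure > 50;              /* component condition c2 */
    int c3 = manual_override != 0;       /* component condition c3 */
    return (c1 && c2) || c3;
}

int main(void)
{
    /* For MC coverage, each of c1, c2, c3 must assume both true and false
       values somewhere in the test suite.                                  */
    printf("%d\n", needs_alert(120, 60, 0));  /* c1=T, c2=T, c3=F -> 1 */
    printf("%d\n", needs_alert( 80, 40, 1));  /* c1=F, c2=F, c3=T -> 1 */
    printf("%d\n", needs_alert( 80, 60, 0));  /* c1=F, c2=T, c3=F -> 0 */
    printf("%d\n", needs_alert(120, 40, 0));  /* c1=T, c2=F, c3=F -> 0 */
    return 0;
}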
❖ Path coverage – A test suite achieves path coverage if it executes each linearly
independent paths (or basis paths) at least once. A linearly independent path can be defined in
terms of the control flow graph (CFG) of a program.
❖ Control Flow Graph – A control flow graph describes how the control flows
through the program. It describes the sequence in which the different instructions of a program
get executed. A CFG is a directed graph consisting of a set of nodes and edges (N, E), such that
each node n ∈ N corresponds to a unique program statement, and an edge exists between two
nodes if control can transfer from one node to the other. We can easily draw the CFG for any
program, if we know how to represent the sequence, selection, and iteration types of statements
in the CFG.
Ex: (figure: CFG representations of sequence, selection, and iteration statements)
❖ McCabe’s Cyclomatic Complexity Metric – McCabe obtained his results by applying
graph-theoretic techniques to the control flow graph of a program. McCabe’s cyclomatic
complexity defines an upper bound on the number of independent paths in a program. We discuss
three different ways to compute the cyclomatic complexity.
Method –I: Given a control flow graph G of a program, the cyclomatic Complexity V (G)
can be computed as:

V (G) = E – N + 2
Where, N is the number of nodes of the control flow graph and E is the number of edges
in the control flow graph.
Method –II: An alternate way of computing the cyclomatic complexity of a program is
based on a visual inspection of the control flow graph. In this method, the cyclomatic complexity
V (G) for a graph G is given as
V (G) = Total number of non-overlapping bounded areas + 1
This is the easiest method, but it cannot be used if the graph G is not planar (i.e., however the
graph is drawn, two or more edges always intersect). So, for non-structured programs, this way of computing
McCabe’s cyclomatic complexity does not apply.
Method – III: The cyclomatic complexity of a program can also be easily computed by
computing the number of decision and loop statements of the program. If N is the number of
decision and loop statements of a program, then the McCabe’s metric is equal to N + 1.
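A minimal sketch (not part of the original notes): a small hypothetical function whose cyclomatic complexity is computed by each of the three methods above:

#include <stdio.h>

/* Hypothetical function: returns the largest of three integers. */
int max_of_three(int a, int b, int c)
{
    int max = a;     /* node 1 */
    if (b > max)     /* node 2 (decision 1) */
        max = b;     /* node 3 */
    if (c > max)     /* node 4 (decision 2) */
        max = c;     /* node 5 */
    return max;      /* node 6 */
}

/* Method I  : the CFG has N = 6 nodes and E = 7 edges
               (1-2, 2-3, 2-4, 3-4, 4-5, 4-6, 5-6), so V(G) = 7 - 6 + 2 = 3.
   Method II : drawn on the plane, the CFG encloses 2 non-overlapping bounded
               areas, so V(G) = 2 + 1 = 3.
   Method III: the function contains 2 decision statements and no loops,
               so V(G) = 2 + 1 = 3.
   Hence at least 3 test cases are needed for path coverage; for example the
   inputs (1,2,3), (3,2,1), and (2,3,1) exercise three linearly independent
   paths.                                                                   */

int main(void)
{
    printf("%d %d %d\n", max_of_three(1, 2, 3), max_of_three(3, 2, 1),
           max_of_three(2, 3, 1));
    return 0;
}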
Steps to carry out path coverage-based testing –
1. Draw the control flow graph for the program.
2. Determine McCabe’s metric V(G).
3. V(G) gives the minimum number of test cases required to achieve path coverage.
4. Design test cases and repeat until the linearly independent paths have been covered.
Uses of McCabe’s cyclomatic complexity metric
• Estimation of structural complexity of code
• Estimation of testing effort
• Estimation of program reliability

Software Engineering
Class – VIII UNIT – IV DT: 2 /02/2021

N.T) Debugging – After a failure has been detected, it is necessary first to identify the program
statement(s) that are in error and are responsible for the failure; the error can then be fixed.
Debugging can be carried out using the following approaches –
• Brute force method – This is the most common method of debugging but is the least
efficient method. In this approach, print statements are inserted throughout the program to print
the intermediate values with the hope that some of the printed values will help to identify the
statement in error. This approach becomes more systematic with the use of a symbolic debugger.
• Backtracking – In this approach, starting from the statement at which an error symptom
has been observed, the source code is traced backwards until the error is discovered.
• Cause elimination method – in this approach, once a failure is observed, the symptoms of
the failure are noted, and the possible causes are listed and systematically eliminated until the error is located.
• Program slicing – this technique is similar to backtracking, but the search space
is reduced by defining program slices relevant to the variable of interest.
❖ Debugging guidelines –
• Debugging often requires a thorough understanding of the program design.
• Debugging may sometimes even require a full redesign of the system.
• One must beware of the possibility that an error correction may introduce new errors.
N.T) Program Analysis Tools – a program analysis tool is an automated tool that takes either the source code or the
executable code of a program as input and produces reports regarding the program, such as its size,
complexity, extent of commenting, adherence to programming standards, adequacy of testing, etc. Program analysis tools
can be classified into two broad categories:
• Static analysis tool
• Dynamic analysis tool
• Static analysis tool – assesses and computes various characteristics of a program without
executing it. It analyzes the source code to compute certain metrics characterizing the source code
and also reports certain analytical conclusions, such as:
• to what extent the coding standards have been adhered to;
• results concerning variables, for example variables that are declared but never used or that are used before being initialized.
A major practical limitation of the static analysis tools lies in their inability to analyze run-
time information such as dynamic memory references using pointer variables and pointer
arithmetic, etc.
• Dynamic analysis tools – used to evaluate several program characteristics based on an
analysis of the run-time behaviour of a program. A dynamic analysis tool collects execution trace
information by instrumenting the code. The instrumented code, when executed, records the behaviour of the software
for the different test cases. After the test suite has been run, a post-execution analysis is carried out and reports are produced
describing the coverage achieved by the complete test suite for the program.
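A minimal sketch (not part of the original notes) of the kind of probes a dynamic analysis tool might insert to measure statement coverage; the instrumented function and probe array are hypothetical:

#include <stdio.h>

/* Probe counters inserted by a (hypothetical) coverage instrumenter:
   one counter per instrumented point of absolute_value().             */
static long hits[3];

int absolute_value(int x)
{
    hits[0]++;             /* probe 0: function entry    */
    if (x < 0) {
        hits[1]++;         /* probe 1: then-branch       */
        return -x;
    }
    hits[2]++;             /* probe 2: fall-through path */
    return x;
}

int main(void)
{
    absolute_value(7);     /* a test suite that never takes the then-branch */

    /* Post-execution analysis: report which probes were never hit. */
    for (int i = 0; i < 3; i++)
        printf("probe %d: %ld hit(s)%s\n", i, hits[i],
               hits[i] == 0 ? "   <-- not covered" : "");
    return 0;
}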
N.T) Integration Testing – Integration testing is carried out after all (or at least some of ) the
modules have been unit tested. The objective of integration testing is to detect the errors at the
module interfaces. During integration testing, different modules of a system are integrated in a
planned manner using an integration plan. An important factor that guides the integration plan is
the module dependency graph. Any one (or a mixture) of the following approaches can be used
to develop the test plan:
• Big-bang approach to integration testing
• Top-down approach to integration testing
• Bottom-up approach to integration testing
• Mixed (also called sandwiched) approach to integration testing.
▪ Big-bang integration testing is the most obvious approach: all the modules making up the
system are integrated in a single step. Its drawback is that it is very difficult to
localize an error, since the error may potentially lie in any of the modules, and fixing it therefore
becomes expensive.
▪ Top-down integration testing starts with the root (top-level) module and integrates and tests
the modules at successively lower levels, checking whether the interfaces among the various modules
work correctly. It requires stubs for the lower-level modules but does not require any drivers. A
disadvantage is that, in the absence of the lower-level routines, it becomes difficult to exercise the
top-level routines meaningfully.
▪ Bottom-up integration testing proceeds in the opposite direction: the lowest-level modules are
tested first using drivers, and higher-level modules are integrated step by step; its disadvantage is that
the top-level, controlling modules are tested last.
▪ Mixed (also called sandwiched) integration testing follows a combination of the
top-down and bottom-up approaches and overcomes their respective shortcomings. Both stubs and
drivers are required in this approach.
N.T) Testing Object-Oriented Programs – testing object-oriented programs is more
difficult and requires more cost and effort than testing similar procedural
programs, because the object-oriented features give scope for new types
of bugs that are not present in procedural programs, and the design of additional test cases is therefore required.
A method has to be tested together with all the other methods and data of the corresponding object, and a
method needs to be tested in all the states that the object can assume. An object is the basic unit of
testing of object-oriented programs.
Implications of different object-orientation features in testing –
• Encapsulation – as far as testing is concerned, encapsulation is not an obstacle to testing,
but leads to difficulty during debugging. Encapsulation prevents the tester from accessing the
data internal to an object.
• Inheritance – it helps in code reuse and was expected to simplify testing. Even if the base
class has been thoroughly tested, the methods inherited from the base class need to be tested again
in the derived class.
• Dynamic binding- Dynamic binding was introduced to make the code compact, elegant,
and easily extensible. However, as far as testing is concerned all possible bindings of a method
call have to be identified and tested. This is not easy since the bindings take place at run-time.
• Object states – the behaviour of an object is usually different in different states, i.e., some
methods may not be active in some of its states. Testing an object in only one of its states is therefore
not enough.
Consequently, traditional testing approaches alone are not adequate for object-oriented programs.
• Grey-box testing of object-oriented programs – for object-oriented programs, several
types of test cases can be designed based on the design models of the program; these
are called grey-box test cases. The following are some important types of grey-box testing that
can be carried out based on UML models:
• State model based testing – which include state coverage, state transition coverage and
also path coverage.
• Use case based testing – Each use case typically consists of a mainline scenario and
several alternate scenarios.
• Class diagram based testing – all derived classes of the base class have to be instantiated and tested.
• Integration testing of object-oriented programs – there are two main approaches to integration testing of object-oriented programs:
• Thread-based
• Use based
➢ Thread-based – all the classes that need to collaborate to realize the behaviour
of a single use case are integrated and tested. After all the required classes for one use case have been
integrated and tested, another use case is taken up.
➢ Use-based – use-based integration begins by testing the classes that either need no
services from other classes or need services from at most a few other classes; classes that use their
services are then integrated and tested in turn.
N.T) System Testing – Given as Assignment

1. Explain the golden rules proposed by Theo Mandel.

2. List the interface design steps with the help of an example.

3. Write a short note on code review, code inspection, and SRS.

4. Define Testing. List the Testing Activities.

5. Write a short note on CFG and McCabe's cyclomatic complexity.

6. Discuss the importance of system testing.
