OOSE Unit-5
Unit 5 Syllabus
Implementation: Coding & Testing
An Overview of Mapping
• Mappings are transformations that aim at improving one aspect
of the model while preserving functionality.
• These transformations occur during numerous object design and
implementation activities.
• Activities:
• optimizing the class model
• mapping associations to collections
• mapping contracts to exceptions
• mapping the class model to a storage schema.
Mapping Concepts
We distinguish four types of transformations
• Model transformations operate on object models.
An example is the conversion of a simple attribute (e.g., an address represented as a string) to a
class (e.g., a class with street address, zip code, city, state, and country attributes).
• Refactorings are transformations that operate on source code. They are similar to object model
transformations in that they improve a single aspect of the system without changing its
functionality.
• Forward engineering produces a source code template that corresponds to an object model.
• Reverse engineering produces a model that corresponds to source code. This transformation is
used when the design of the system has been lost and must be recovered from the source code.
Model Transformation
• A model transformation is applied to an
object model and results in another object
model.
• The purpose of object model transformation
is to simplify or optimize the original model,
bringing it into closer compliance with all
requirements in the specification.
• A transformation may add, remove, or
rename classes, operations, associations, or
attributes.
• A transformation can also add information to
the model or remove information from it.
Refactoring
• A refactoring is a transformation of the source code that improves its
readability or modifiability without changing the behavior of the
system.
• Refactoring aims at improving the design of a working system by
focusing on a specific field or method of a class.
• To ensure that the refactoring does not change the behavior of the
system, the refactoring is done in small incremental steps that are
interleaved with tests.
• For example, the object model transformation of Figure 10-2
corresponds to a sequence of three refactorings.
• The first one, Pull Up Field, moves the email field from the subclasses to the superclass
User.
• The second one, Pull Up Constructor Body, moves the initialization code from the
subclasses to the superclass.
• The third and final one, Pull Up Method, moves the methods manipulating the email field
from the subclasses to the superclass.
Pull Up Field relocates the email
field using the following steps
(Figure 10-3):
1. Examine the methods of Player that use the email field. Note that Player.notify()
uses email and that it does not use any fields or operations that are specific to
Player.
2. Copy the Player.notify() method to the User class and recompile.
3. Remove the Player.notify() method.
4. Compile and test.
5. Repeat for LeagueOwner and Advertiser.
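A minimal Java sketch of the end state after all three pull-up refactorings, assuming the simplified Arena classes from the example; the EmailService helper is a hypothetical stand-in for the real notification code:

abstract class User {
    protected String email;                       // field pulled up from the subclasses

    protected User(String email) {
        this.email = email;                       // constructor body pulled up
    }

    public void notify(String message) {          // method pulled up
        EmailService.send(email, message);
    }
}

class Player extends User {
    Player(String email) { super(email); }        // only Player-specific members remain here
}

class LeagueOwner extends User {
    LeagueOwner(String email) { super(email); }
}

class Advertiser extends User {
    Advertiser(String email) { super(email); }
}

class EmailService {                              // hypothetical helper, not part of the original model
    static void send(String to, String message) {
        System.out.println("mail to " + to + ": " + message);
    }
}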
Forward Engineering
• Forward engineering is applied to a set of
model elements and results in a set of
corresponding source code statements, such
as a class declaration, a Java expression, or
a database schema.
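For example, forward-engineering a UML class User with an email attribute and a notify() operation could yield a Java template like the one below; the exact output depends on the tool, so treat this only as an illustrative sketch:

// Class declaration generated from the object model: one field per attribute,
// accessors for the field, and an empty stub per operation.
public class User {
    private String email;

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public void notify(String message) {
        // body to be filled in by the developer
    }
}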
Reverse Engineering
• Reverse engineering is applied to a set of source code elements and results in a set
of model elements.
• The purpose of this type of transformation is to recreate the model for an existing
system, either because the model was lost or never created, or because it became
out of sync with the source code.
• Reverse engineering is essentially an inverse transformation of forward
engineering. Reverse engineering creates a UML class for each class declaration
statement, adds an attribute for each field, and adds an operation for each method
Transformation Principles
• A transformation aims at improving the design of the system with respect to some
criterion.
• To avoid introducing new errors, all transformations should follow these
principles:
• Each transformation must address a single criterion.
• Each transformation must be local.
• Each transformation must be applied in isolation from other changes.
• Each transformation must be followed by a validation step.
Mapping Activities
• Optimizing the Object Design Model
• Mapping Associations to Collections
• Mapping Contracts to Exceptions
• Mapping Object Models to a Persistent Storage Schema
Optimizing the Object Design Model
• During object design, we transform the object model to meet the design goals identified
during system design, such as minimization of response time, execution time, or memory
resources.
• We describe four simple but common optimizations:
• adding associations to optimize access paths,
• collapsing objects into attributes,
• delaying expensive computations, and
• caching the results of expensive computations.
• When applying optimizations, developers must strike a balance between efficiency and
clarity. Optimizations increase the efficiency of the system but also the complexity of the
models, making it more difficult to understand the system.
Optimizing access paths
• Common sources of inefficiency are the repeated traversal of multiple associations, the
traversal of associations with “many” multiplicity, and the misplacement of attributes.
• Repeated association traversals. To identify inefficient access paths, you should identify
operations that are invoked often and examine, with the help of a sequence diagram, the
subset of these operations that requires multiple association traversal. Frequent operations
should not require many traversals, but should have a direct connection between the
querying object and the queried object.
• “Many” associations. For associations with “many” multiplicity, you should try to
decrease the search time by reducing the “many” to “one.” This can be done with a
qualified association. If it is not possible to reduce the multiplicity of the association, you
should consider ordering or indexing the objects on the “many” side to decrease access
time.
• Misplaced attributes. Another source of inefficient system performance is excessive
modeling. Attributes that are mostly accessed by operations of another class should be
moved to that class. After folding several attributes, some classes may not be needed
anymore and can simply be removed from the model.
Collapsing objects: Turning objects into attributes
• After the object model is restructured and optimized a couple of times, some of its classes
may have few attributes or behaviors left. Such classes, when associated only with one
other class, can be collapsed into an attribute, thus reducing the overall complexity of the
model.
The refactoring equivalent to this model
transformation is the Inline Class
refactoring:
1. Declare the public fields and methods
of the source class (e.g., SocialSecurity)
in the absorbing class (e.g., Person).
2. Change all references to the source
class to the absorbing class.
3. Change the name of the source class
to another name, so that the compiler
catches any dangling references.
4. Compile and test.
5. Delete the source class.
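A small Java sketch of the end state, assuming the SocialSecurity class only carried a number that Person now stores directly (names are illustrative):

// After Inline Class: the field and method formerly in SocialSecurity now live in Person.
public class Person {
    private String name;
    private String socialSecurityNumber;          // absorbed from the deleted SocialSecurity class

    public Person(String name, String socialSecurityNumber) {
        this.name = name;
        this.socialSecurityNumber = socialSecurityNumber;
    }

    public String getSocialSecurityNumber() {     // absorbed from SocialSecurity
        return socialSecurityNumber;
    }

    public String getName() {
        return name;
    }
}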
Delaying expensive computations
• Specific objects are expensive to create. However, their creation can
often be delayed until their actual content is needed.
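In Java this is usually realized with lazy initialization, as in the following sketch of a hypothetical Image class whose pixel data is loaded only when first requested:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class Image {
    private final Path file;
    private byte[] pixels;                        // expensive content, created lazily

    public Image(Path file) {
        this.file = file;                         // cheap: just remember where the data lives
    }

    public byte[] getPixels() throws IOException {
        if (pixels == null) {
            pixels = Files.readAllBytes(file);    // the expensive work happens here, once
        }
        return pixels;
    }
}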
Caching the result of expensive computations
• Some methods are called many times, but their results are based on
values that do not change or change only infrequently.
• Reducing the number of computations required by these methods can
substantially improve overall response time.
• In such cases, the result of the computation should be cached as a private
attribute.
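A minimal sketch, assuming a Statistics-like class whose average is read far more often than new values are added:

import java.util.ArrayList;
import java.util.List;

public class Statistics {
    private final List<Double> values = new ArrayList<>();
    private Double cachedAverage;                 // private attribute caching the result

    public void add(double value) {
        values.add(value);
        cachedAverage = null;                     // invalidate the cache when the inputs change
    }

    public double getAverage() {
        if (cachedAverage == null) {              // recompute only when necessary
            double sum = 0.0;
            for (double v : values) {
                sum += v;
            }
            cachedAverage = values.isEmpty() ? 0.0 : sum / values.size();
        }
        return cachedAverage;
    }
}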
Unidirectional one-to-one associations.
Many-to-many associations
Qualified associations.
• Qualified associations are used to reduce the multiplicity of one “many” side in a
one-to-many or a many-to-many association.
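In source code, a qualified association is typically implemented with a map keyed by the qualifier, so the object on the former "many" side is found by a single lookup. A sketch, assuming a League that holds Players qualified by nickname (illustrative names, not from the text):

import java.util.HashMap;
import java.util.Map;

class Player {
    final String nickName;
    Player(String nickName) { this.nickName = nickName; }
}

class League {
    // The qualifier (nickName) reduces the "many" side to "one".
    private final Map<String, Player> playersByNickName = new HashMap<>();

    void addPlayer(Player p) {
        playersByNickName.put(p.nickName, p);
    }

    Player getPlayer(String nickName) {
        return playersByNickName.get(nickName);   // direct lookup instead of a linear search
    }
}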
Association classes.
• In UML, we use an association class to hold the attributes and operations of an
association. For example, we can represent the Statistics for a Player within a
Tournament as an association class, which holds statistics counters for each
Player/Tournament combination
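In Java, the association class can be realized as an ordinary class that references both ends of the association, roughly as sketched below (the counter attributes are assumptions):

class Player { /* player attributes */ }
class Tournament { /* tournament attributes */ }

// Holds the statistics counters for one Player/Tournament combination.
class Statistics {
    final Player player;
    final Tournament tournament;
    int gamesPlayed;
    int gamesWon;

    Statistics(Player player, Tournament tournament) {
        this.player = player;
        this.tournament = tournament;
    }
}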
Mapping Contracts to Exceptions
• A simple mapping would be to treat each operation in the contract individually
and to add code within the method body to check the preconditions,
postconditions, and invariants relevant to the operation:
• Checking preconditions. Preconditions should be checked at the beginning of the
method, before any processing is done.
• Checking postconditions. Postconditions should be checked at the end of the
method, after all the work has been accomplished and the state changes are
finalized.
• Checking invariants. When treating each operation contract individually,
invariants are checked at the same time as postconditions.
• Dealing with inheritance. The checking code for preconditions and
postconditions should be encapsulated into separate methods that can be called
from subclasses.
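A minimal Java sketch for a single operation, assuming a Tournament.addPlayer() contract whose precondition is that the tournament is not full and whose postcondition is that the player is registered; the exception type is an illustrative choice:

import java.util.ArrayList;
import java.util.List;

class Tournament {
    private final int maxNumPlayers;
    private final List<String> players = new ArrayList<>();

    Tournament(int maxNumPlayers) {
        this.maxNumPlayers = maxNumPlayers;
    }

    void addPlayer(String player) {
        // Precondition check at the beginning, before any processing.
        if (players.size() >= maxNumPlayers) {
            throw new IllegalStateException("precondition violated: tournament is full");
        }

        players.add(player);

        // Postcondition check at the end, after the state change is finalized.
        if (!players.contains(player)) {
            throw new IllegalStateException("postcondition violated: player was not added");
        }
    }
}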
Mapping Object Models to a Persistent Storage Schema
• Persistent objects are usually treated like all other objects. However,
object-oriented programming languages do not usually provide an
efficient way to store persistent objects. In this case, we need to map
persistent objects to a data structure that can be stored by the
persistent data management system decided during system design,
in most cases, either a database or a set of files.
• For object-oriented databases, no transformations need be done,
since there is a one-to-one mapping between classes in the object
model and classes in the object-oriented database. However, for
relational databases and flat files, we need to map the object model
to a storage schema and provide an infrastructure for converting
from and to persistent storage.
• A schema is a description of the data, that is, a meta-model for
data. In UML, class diagrams are used to describe the set of valid
instances that can be created by the source code.
• Relational databases store both the
schema and the data.
• A table is structured in columns, each
of which represents an attribute.
• A primary key of a table is a set of
attributes whose values uniquely
identify the data records in a table.
• Sets of attributes that could be used as
a primary key are called candidate
keys. Only the actual candidate key
that is used in the application to
identify data records is the primary key.
• A foreign key is an attribute (or a set
of attributes) that references the
primary key of another table.
Steps involved in mapping an object model to a relational
database using Java and database schemas
Mapping classes and attributes
• When mapping the persistent objects to relational schema, we focus
first on the classes and their attributes
• When mapping attributes, we need to select a data type for the
database column. For primitive types, the correspondence between
the programming language type and the database type is usually
trivial (e.g., the Java Date type maps to the datetime type in SQL)
• Next, we focus on the primary key. There are two options when
selecting a primary key for the table.
• The first option is to identify a set of class attributes that
uniquely identifies the object.
• The second option is to add a unique identifier attribute that we
generate.
• For example, in Figure 10-16, we use the login
name of the user as a primary key. Although this
approach is intuitive, it has several drawbacks. If
the value of the login attribute changes, we need
to update all tables in which the user login name
occurs as a foreign key. Also, selecting attributes
from the application domain can make it difficult
to change the database schema when the
application domain changes. For example, in the
future, we could use a single table to store users
from different Arenas. As login names are unique
only within a single Arena, we would need to add
the name of the Arena in the primary key.
• The second option is to use an arbitrary unique
identifier (id) attribute as a primary key. We
generate the id attribute for each object and can
guarantee that it is unique and will not change.
Some database management systems provide
features for automatically generating ids.
• This results in a more robust schema and
primary and foreign keys that consist of one
column.
Mapping associations
• The mapping of associations to a database schema depends on the multiplicity of
the association. One-to-one and one-to-many associations are implemented as a
so-called buried association.
• Buried associations. Associations with multiplicity one can be implemented
using a foreign key. For one-to-many associations, we add a foreign key to the
table representing the class on the “many” end.
Separate table
• Many-to-many associations are implemented using a separate
two-column table with foreign keys for both classes of the
association
• Each row in the association table corresponds to a link between
two instances.
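The two mappings can be sketched as SQL DDL issued from Java through JDBC. Table and column names are illustrative, and the in-memory H2 URL is only an assumption for trying the statements out: a one-to-many League-Player association is buried as a foreign key in Player, while a many-to-many Tournament-Player association gets its own two-column table.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class AssociationSchema {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:arena");
             Statement s = c.createStatement()) {

            s.executeUpdate("CREATE TABLE League ("
                    + " id BIGINT PRIMARY KEY,"
                    + " name VARCHAR(255))");

            // One-to-many: the foreign key is buried in the table on the "many" end.
            s.executeUpdate("CREATE TABLE Player ("
                    + " id BIGINT PRIMARY KEY,"
                    + " name VARCHAR(255),"
                    + " league_id BIGINT REFERENCES League(id))");

            s.executeUpdate("CREATE TABLE Tournament ("
                    + " id BIGINT PRIMARY KEY,"
                    + " name VARCHAR(255))");

            // Many-to-many: a separate table whose rows are the individual links.
            s.executeUpdate("CREATE TABLE TournamentPlayer ("
                    + " tournament_id BIGINT REFERENCES Tournament(id),"
                    + " player_id BIGINT REFERENCES Player(id),"
                    + " PRIMARY KEY (tournament_id, player_id))");
        }
    }
}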
Mapping inheritance relationships
• Relational databases do not directly
support inheritance, but there are
two main options for mapping an
inheritance relationship to a
database schema.
Testing Concepts
• A test component is a part of the system that can be isolated for testing. A component can be
an object, a group of objects, or one or more subsystems.
• A fault, also called bug or defect, is a design or coding mistake that may cause abnormal
component behavior.
• An erroneous state is a manifestation of a fault during the execution of the system. An
erroneous state is caused by one or more faults and can lead to a failure.
• A failure is a deviation between the specification and the actual behavior. A failure is
triggered by one or more erroneous states. Not all erroneous states trigger a failure.
• A test case is a set of inputs and expected results that exercises a test component with the
purpose of causing failures and detecting faults.
• A test stub is a partial implementation of components on which the tested component
depends.
• A test driver is a partial implementation of a component that depends on the test component.
Test stubs and drivers enable components to be isolated from the rest of the system for testing.
• A correction is a change to a component. The purpose of a correction is to repair a fault. Note
that a correction can introduce new faults.
Testing Activities
The technical activities of testing include component inspection, usability testing, unit
testing, integration testing, and system testing.
Component Inspection
• Inspections find faults in a component by reviewing its source code in a formal
meeting.
• Fagan’s inspection method consists of five steps:
• Overview. The author of the component briefly presents the purpose and
scope of the component and the goals of the inspection.
• Preparation. The reviewers become familiar with the implementation of the
component.
• Inspection meeting. A reader paraphrases the source code of the component,
and the inspection team raises issues with the component. A moderator keeps
the meeting on track.
• Rework. The author revises the component.
• Follow-up. The moderator checks the quality of the rework and may
determine whether the component needs to be reinspected.
Usability Testing
• Usability testing tests the user’s understanding of the system.
• There are three types of usability tests
• Scenario test. During this test, one or more users are presented with a
visionary scenario of the system.
• Prototype test. During this type of test, the end users are presented with a
piece of software that implements key aspects of the system.
• A vertical prototype completely implements a use case through the
system.
• A horizontal prototype implements a single layer in the system.
• Product test. This test is similar to the prototype test except that a functional
version of the system is used in place of the prototype
Unit Testing
• Unit testing focuses on the building blocks of the software system, that is, objects
and subsystems.
• There are three motivations behind focusing on these building blocks:
• First, unit testing reduces the complexity of overall test activities, allowing us
to focus on smaller units of the system.
• Second, unit testing makes it easier to pinpoint and correct faults, given that
few components are involved in the test.
• Third, unit testing allows parallelism in the testing activities; that is, each
component can be tested independently of the others.
Unit Testing
• Many unit testing techniques have been devised. The most important ones are
Equivalence testing, Boundary testing, Path testing, and State-based testing.
Equivalence testing
• This blackbox testing technique minimizes the number of test cases. The possible inputs
are partitioned into equivalence classes, and a test case is selected for each class.
• Equivalence testing consists of two steps:
• identification of the equivalence classes and
• selection of the test inputs.
The following criteria are used in determining the equivalence classes.
• Coverage. Every possible input belongs to one of the equivalence classes.
• Disjointedness. No input belongs to more than one equivalence class.
• Representation. If the execution demonstrates an erroneous state when a particular
member of an equivalence class is used as input, then the same erroneous state can be
detected by using any other member of the class as input.
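As an illustration, consider a hypothetical getNumDaysInMonth(month, year) method. Its inputs fall into equivalence classes such as months with 31 days, months with 30 days, February in leap and non-leap years, and invalid months; one representative input is selected per class. A JUnit 5 sketch (class and test names are assumptions):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class Months {
    static int getNumDaysInMonth(int month, int year) {
        boolean leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        switch (month) {
            case 1: case 3: case 5: case 7: case 8: case 10: case 12: return 31;
            case 4: case 6: case 9: case 11: return 30;
            case 2: return leap ? 29 : 28;
            default: throw new IllegalArgumentException("invalid month: " + month);
        }
    }
}

class MonthsEquivalenceTest {
    // One test input per equivalence class.
    @Test void monthWith31Days()       { assertEquals(31, Months.getNumDaysInMonth(1, 2024)); }
    @Test void monthWith30Days()       { assertEquals(30, Months.getNumDaysInMonth(4, 2024)); }
    @Test void februaryInLeapYear()    { assertEquals(29, Months.getNumDaysInMonth(2, 2024)); }
    @Test void februaryInNonLeapYear() { assertEquals(28, Months.getNumDaysInMonth(2, 2023)); }
    @Test void invalidMonth() {
        assertThrows(IllegalArgumentException.class, () -> Months.getNumDaysInMonth(13, 2024));
    }
}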
Boundary testing
• This special case of equivalence testing focuses on the conditions at the boundary of the
equivalence classes. Rather than selecting any element in the equivalence class,
boundary testing requires that the elements be selected from the “edges” of the
equivalence class.
• The assumption behind boundary testing is that developers often overlook special cases
at the boundary of the equivalence classes.
• A disadvantage of equivalence and boundary testing is that these techniques do not
explore combinations of test input data. In many cases, a program fails because a
combination of certain input values drives the system into an erroneous state.
• Cause-effect testing addresses this problem by establishing logical relationships
between input and outputs or inputs and transformations. The inputs are called causes,
the outputs or transformations are effects. The technique is based on the premise that the
input/output behavior can be transformed into a Boolean function.
Path testing
• This whitebox testing technique identifies faults in the implementation of the
component. The assumption behind path testing is that, by exercising all possible paths
through the code at least once, most faults will trigger failures.
• The identification of paths requires knowledge of the source code and data structures.
The starting point for path testing is the flow graph.
• A flow graph consists of nodes representing executable blocks and edges representing
flow of control. A flow graph is constructed from the code of a component by mapping
decision statements (e.g., if statements, while loops) to nodes. Statements between each
decision (e.g., then block, else block) are mapped to other nodes. Associations between
each node represent the precedence relationships.
• Complete path testing consists of designing test cases such that each edge in the activity
diagram is traversed at least once. This is done by examining the condition associated
with each branch point and selecting an input for the true branch and another input for
the false branch.
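A small sketch: for a component with a single decision node, complete path testing needs one input that drives the true branch and one that drives the false branch (JUnit 5, illustrative names):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class Shipping {
    // Flow graph: one decision node (the if), a then block and an else block.
    static int cost(int orderTotal) {
        if (orderTotal >= 50) {
            return 0;                             // then block: free shipping
        }
        return 5;                                 // else block: flat fee
    }
}

class ShippingPathTest {
    @Test void exercisesTrueBranch()  { assertEquals(0, Shipping.cost(60)); }
    @Test void exercisesFalseBranch() { assertEquals(5, Shipping.cost(20)); }
}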
State-based testing
• This testing technique was recently developed for object-oriented systems [Turner &
Robson, 1993]. Most testing techniques focus on selecting a number of test inputs for a
given state of the system, exercising a component or a system, and comparing the
observed outputs with an oracle.
• State-based testing, however, compares the resulting state of the system with the
expected state.
• In the context of a class, state-based testing consists of deriving test cases from the UML
state machine diagram for the class. For each state, a representative set of stimuli is
derived for each transition (similar to equivalence testing). The attributes of the class are
then instrumented and tested after each stimulus has been applied to ensure that the class
has reached the specified state.
• Currently, state-based testing presents several difficulties. Because the state of a class is
encapsulated, test cases must include sequences for putting classes in the desired state
before given transitions can be tested.
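A state-based test therefore first drives the object into the source state, then applies the stimulus and checks the resulting state against the state machine. A JUnit 5 sketch with a hypothetical Account class:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class Account {
    enum State { OPEN, FROZEN, CLOSED }
    State state = State.OPEN;

    void freeze() { if (state == State.OPEN) state = State.FROZEN; }
    void close()  { state = State.CLOSED; }
}

class AccountStateTest {
    @Test
    void freezeTransitionFromOpen() {
        Account account = new Account();                      // sequence that puts the object in the source state
        account.freeze();                                     // stimulus for the transition under test
        assertEquals(Account.State.FROZEN, account.state);    // compare resulting state with expected state
    }
}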
Integration Testing
• Integration testing detects faults that have not been detected during unit testing by
focusing on small groups of components.
• Two or more components are integrated and tested, and when no new faults are
revealed, additional components are added to the group.
• If two components are tested together, we call this a double test. Testing three
components together is a triple test, and a test with four components is called a
quadruple test.
• There are two types of Integration testing
• horizontal integration testing strategies, in which components are integrated
according to layers.
• vertical integration testing strategies, in which components are integrated
according to functions.
Horizontal integration testing strategies
• Several approaches have been devised to implement a horizontal integration testing
strategy: big bang testing, bottom-up testing, top-down testing, and sandwich testing.
• Each of these strategies was originally devised by assuming that the system
decomposition is hierarchical and that each of the components belongs to hierarchical
layers ordered with respect to the “Call” association.
• The big bang testing strategy assumes that all components are first
tested individually and then tested together as a single system.
• The bottom-up testing strategy first tests each component of the
bottom layer individually, and then integrates them with components
of the next layer up.
• The top-down testing strategy unit tests the components of the top
layer first, and then integrates the components of the next layer down.
• The sandwich testing strategy combines the top-down and bottom-
up strategies, attempting to make use of the best of both
Vertical integration testing strategies
• Vertical integration testing strategies focus on early integration. For a given use case,
the needed parts of each component, such as the user interface, business logic,
middleware, and storage, are identified and developed in parallel and integration
tested.
• The drawback of vertical integration testing is that the system design is evolved
incrementally, often resulting in reopening major system design decisions.
System Testing
• System testing ensures that the complete system complies with the functional and
nonfunctional requirements.
• During system testing, several activities are performed:
• Functional testing. Test of functional requirements (from RAD)
• Performance testing. Test of nonfunctional requirements (from SDD)
• Pilot testing. Tests of common functionality among a selected group of end
users in the target environment
• Acceptance testing. Usability, functional, and performance tests performed by
the customer in the development environment against acceptance criteria
(from Project Agreement)
• Installation testing. Usability, functional, and performance tests performed by
the customer in the target environment. If the system is only installed at a
small selected set of customers, it is called a beta test
Functional testing
• Functional testing, also called requirements testing, finds differences between the
functional requirements and the system.
• Functional testing is a blackbox technique: test cases are derived from the use case
model. In systems with complex functional requirements, it is usually not possible
to test all use cases for all valid and invalid inputs.
• The goal of the tester is to select those tests that are relevant to the user and have a
high probability of uncovering a failure.
Performance testing
• Performance testing finds differences between the design goals selected during
system design and the system. Because the design goals are derived from the
nonfunctional requirements, the test cases can be derived from the SDD or from
the RAD.
• The following tests are performed during performance testing:
• Stress testing checks if the system can respond to many simultaneous requests. For
example, if an information system for car dealers is required to interface with 6000
dealers, the stress test evaluates how the system performs with more than 6000
simultaneous users.
• Volume testing attempts to find faults associated with large amounts of data, such as
static limits imposed by the data structure, or high-complexity algorithms, or high disk
fragmentation.
• Security testing attempts to find security faults in the system. There are few systematic
methods for finding security faults. Usually this test is accomplished by “tiger teams”
who attempt to break into the system, using their experience and knowledge of typical
security flaws.
• Timing testing attempts to find behaviors that violate timing constraints described by
the nonfunctional requirements.
• Recovery testing evaluates the ability of the system to recover from erroneous states,
such as the unavailability of resources, a hardware failure, or a network failure.
• After all the functional and performance tests have been performed, and no failures have
been detected during these tests, the system is said to be validated.
Pilot testing
• During the pilot test, also called the field test, the system is installed and used by a
selected set of users. Users exercise the system as if it had been permanently
installed. No explicit guidelines or test scenarios are given to the users.
• Pilot tests are useful when a system is built without a specific set of requirements
or without a specific customer in mind. In this case, a group of people is invited to
use the system for a limited time and to give their feedback to the developers.
• An alpha test is a pilot test with users exercising the system in the development
environment.
• In a beta test, the pilot test is performed by a limited number of end users in the
target environment
Acceptance testing
• There are three ways the client evaluates a system during acceptance testing.
• In a benchmark test, the client prepares a set of test cases that represent typical
conditions under which the system should operate. Benchmark tests can be performed
with actual users or by a special test team exercising the system functions, but it is
important that the testers be familiar with the functional and nonfunctional
requirements so they can evaluate the system.
• In competitor testing, the new system is tested against an existing system or
competitor product.
• In shadow testing, a form of comparison testing, the new and the legacy systems are
run in parallel, and their outputs are compared.
• After acceptance testing, the client reports to the project manager which
requirements are not satisfied. Acceptance testing also gives the opportunity for a
dialog between the developers and client about conditions that have changed and
which requirements must be added, modified, or deleted because of the changes.
Installation testing
• After the system is accepted, it is installed in the target environment.
• A good system testing plan allows the easy reconfiguration of the system from the
development environment to the target environment. The desired outcome of the
installation test is that the installed system correctly addresses all requirements.
• In most cases, the installation test repeats the test cases executed during functional
and performance testing in the target environment. Some requirements cannot be
executed in the development environment because they require target-specific
resources. To test these requirements, additional test cases have to be designed and
performed as part of the installation test.
• Once the customer is satisfied with the results of the installation test, system
testing is complete, and the system is formally delivered and ready for operation.
Managing Testing
Many testing activities occur near the end of the project, when resources are running
low and delivery pressure increases. Often, trade-offs must be made between the faults to be
repaired before delivery and those that can be repaired in a subsequent revision of the
system. In the end, however, developers should detect and repair a sufficient number
of faults such that the system meets functional and nonfunctional requirements to an
extent acceptable to the client.
Planning Testing
• Developers can reduce the cost of testing and the elapsed time necessary for its
completion through careful planning.
• Two key elements are to start the selection of test cases early and to parallelize tests.
• Developers responsible for testing can design test cases as soon as the models they
validate become stable.
• The second key element in shortening testing time is to parallelize testing activities.
Documenting Testing
• Testing activities are documented in four types of documents:
• The Test Plan focuses on the managerial aspects of testing. It documents the
scope, approach, resources, and schedule of testing activities. The
requirements and the components to be tested are identified in this document.
• Each test is documented by a Test Case Specification. This document
contains the inputs, drivers, stubs, and expected outputs of the tests, as well as
the tasks to be performed.
• Each execution of each test is documented by a Test Incident Report. The
actual results of the tests and differences from the expected output are
recorded.
• The Test Report Summary document lists all the failures discovered during
the tests that need to be investigated. From the Test Report Summary, the
developers analyze and prioritize each failure and plan for changes in the
system and in the models. These changes in turn can trigger new test cases and
new test executions.
The Test Plan (TP) and the Test Case Specifications (TCS) are written early in
the process, as soon as the test planning and each test case are completed. These
documents are under configuration management and updated as the system
models change.
The Test Incident Report lists the actual test results and the failures that were
experienced. The description of the results must include which features were
demonstrated and whether the expected results were obtained. If a failure has been
experienced, the test incident report should contain sufficient information to allow
the failure to be reproduced.
Failures from all Test Incident Reports are collected and listed in the Test Report Summary.
Assigning Responsibilities
• Testing requires developers to find faults in components of the system. This is best
done when the testing is performed by a developer who was not involved in the
development of the component under test, one who is less reluctant to break the
component being tested and who is more likely to find ambiguities in the
component specification.
• For stringent quality requirements, a separate team dedicated to quality control is
solely responsible for testing. The testing team is provided with the system
models, the source code, and the system for developing and executing test cases.
Test Incident Reports and Test Report Summaries are then sent back to the
subsystem teams for analysis and possible revision of the system.
• The revised system is then retested by the testing team, not only to check if the
original failures have been addressed, but also to ensure that no new faults have
been inserted in the system.
Regression Testing
• Changes to a component can invalidate assumptions made by the unchanged components,
leading to erroneous states. Integration tests that are rerun on the system to expose
such failures are called regression tests.
• The most robust and straightforward technique for regression testing is to accumulate
all integration tests and rerun them whenever new components are integrated into the
system. This requires developers to keep all tests up-to-date, to evolve them as the
subsystem interfaces change, and to add new integration tests as new services or new
subsystems are added.
• As regression testing can become time consuming, different techniques have been
developed for selecting specific regression tests.
Such techniques include:
• Retest dependent components.
• Retest risky use cases.
• Retest frequent use cases.
Automating Testing
Manual testing requires a tester to feed predefined inputs into the system using
the user interface, a command-line console, or a debugger.
The tester then compares the outputs generated by the system against the expected
outputs (the oracle).
The key benefit of automating test execution is repeatability: the same tests can be
rerun consistently whenever the system changes.
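A minimal JUnit 5 sketch of the automated equivalent: the predefined input and the oracle are encoded in the test itself, so the comparison can be rerun unchanged after every modification of the system (names are illustrative):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class Greeter {
    String greet(String name) { return "Hello, " + name + "!"; }
}

class GreeterTest {
    @Test
    void outputMatchesOracle() {
        String actual = new Greeter().greet("Alice");   // predefined input fed automatically
        assertEquals("Hello, Alice!", actual);          // comparison against the expected output (oracle)
    }
}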
Documenting Architecture: Architectural views
• Logical view: A high-level representation of a system's functionality and how its components
interact. It's typically represented using UML diagrams such as class diagrams, sequence
diagrams, and activity diagrams.
• Deployment view: Shows the distribution of processing across a set of nodes in the system,
including the physical distribution of processes and threads. This view focuses on aspects of the
system that are important after the system has been tested and is ready to go into live operation.
• Cloud security architecture: Describes the structures that protect the data, workloads,
containers, virtual machines and APIs within the cloud environment.
• Data architecture view: Addresses the concerns of database designers and database
administrators, and system engineers responsible for developing and integrating the various
database components of the system. Modern data architecture views data as a shared asset and
does not allow departmental data silos.
• Behavioral architecture view: An approach in architectural design and spatial planning that
takes into consideration how humans interact with their physical environment from a behavioral
perspective. A behavioral architecture model is an arrangement of functions and their sub-
functions as well as interfaces (inputs and outputs).