
___ Chapter Seven _______________________________________

7. Trade-off examples inside software engineering and computer science

7.1. Introduction
During software development, tradeoffs are made on a daily basis by the people participating in the
development project. Different roles in the project have to handle different tradeoffs. For example, managers
distribute work to developers, and while doing so they have to balance the workload between the developers and
decide how many people should be assigned to a particular task. If more people are assigned to a task the task will
be completed faster, but adding more people past a certain point only serves to increase the overhead of the group
and in turn increases the time it takes to complete the task. Developers in turn make decisions regarding design and
implementation details. An example is when software architects try to balance the quality attributes of the system. A
balance of functional as well as quality requirements has to be achieved so that the intended users of the system will
find it useful.
Two extremes in the approaches to tradeoffs can be identified. The first is based on the developers' knowledge and
experience: by consulting earlier experiences it can be possible to make a tradeoff in an informal or ad hoc way. On
the other side of the scale we have a set of tradeoff methods. These methods describe how to perform a tradeoff:
which steps are involved, what to focus on when doing the tradeoff, and so on. Tradeoffs have to be performed
continuously throughout software development. Depending on the importance and level of risk involved, the less
important ones can be performed in an ad hoc way, but if the risk or impact is higher they should be more thoroughly
analyzed and documented before a decision is made.
The type of tradeoff that has to be considered changes depending on the roles and the progress of the project
through its lifecycle. The type of trade-off that is most common during the early stages of a project is not the same as
during the later stages. For example, during the initial phases of analysis and planning, tradeoffs such as staffing
versus lead time or lead time versus cost have to be performed. Later in the project, during the design phase of the
development, tradeoffs are made regarding, for example, the choice of technology versus quality requirements and
development time. When the implementation is complete, the test phase brings its own set of tradeoffs, for
example when to stop testing versus the number of defects expected to still be present in the system.
The critical part of a tradeoff method is to quantify the factors that are involved; this task varies in degree of
difficulty depending on the aspects involved. Some aspects of software development and software behaviour are
rather easy to quantify, for example different aspects of performance such as time behaviour and throughput. Other
easily quantified aspects are development time, different size measures, etc. Most of these can be derived from the
functional requirements of a system. Aspects of a system that are derived from non-functional requirements are
often harder to quantify. Attributes such as usability and testability are more difficult to estimate. This makes it
more difficult to perform tradeoffs that involve one or more of the less quantifiable attributes.
Each of these trade-off examples has been researched in order to simplify and formalize the process of making
the trade-off. The formalization of how a trade-off is performed in a certain context is called a trade-off method.
Common to most trade-off methods is that they first try to quantify or structure the factors that are involved in the
trade-off. Once the quantification has been done, the actual trade-off decision is easier to make. The quantification
also makes it possible to compare different alternatives in an unbiased way (people have a tendency to root for their
own alternative and might be hard to persuade unless the alternatives have been compared and evaluated in what they
perceive as a fair way). In this chapter we will take a look at some of the methods that are available for structuring
and quantifying the information necessary to make tradeoffs in some situations. We will concentrate on software
development projects and look at four different examples where trade-off methods have been applied. Each example
project is in a different phase of the project lifecycle.
7.2. Examples
After spending some time searching through publications, we identified four interesting examples that can be
used to illustrate tradeoffs at different phases and levels of a software project. The examples describe tradeoffs in the
context of maintenance, software design, and system testing. These phases and examples were chosen out of
convenience; tradeoffs of course also exist in other phases of development and in other domains.
For each project we will first give a short introduction, describing the context and the goal of the project. We
continue by describing the problems that they ran into and what they wanted to achieve. We will then look at the
trade-off approach that they applied and finally the outcome of the case study.

7.2.1. Example 1
This example is from the telecommunication domain; the system studied is a real-time telecommunications
system that was scheduled for maintenance [3].

Problem description
The trade-off in question concerns the selection of the most appropriate of three architecture alternatives
for the maintenance work that is going to be performed on the system. The goal is to introduce new functionality into
the system without affecting the system's existing quality attributes negatively.

Trade-off method used


Based on the functional and quality requirements of the system, a number of scenarios are created that represent
both the day-to-day use and the intended use of the new functionality introduced in the system. Using these
scenarios, metrics are then extracted from architecture descriptions prepared for each of the maintenance scenarios.
Since the evaluation is conducted at the architecture level, the metrics can only cover measures such as the number
of active data repositories, passive data repositories, persistent and non-persistent components, data links, control
links, logical groupings, styles and patterns, and violations of the intended architecture. These metrics are collected
by a number of domain experts who estimate complexity, impact and effort for each of the scenarios for each of
the architectures, using an existing architecture as a point of reference (usually the existing version of the
architecture). Based on the collected metrics it is then possible to compare the architecture alternatives and select
the most appropriate one for the maintenance of the system. The alternative architectures are assessed mainly with
respect to robustness, but they are also compared for reliability, maintainability, interoperability, portability,
scalability and performance. The positive or negative impact of the changes to the architecture is recorded for each
of the quality attributes. The results are then collected and prioritized in a report where all of the alternatives are
presented with their respective strengths and weaknesses (see Figure 1).
This tradeoff method helps the people who perform the tradeoff to structure the process of evaluating the
alternatives, but in the end it relies on them to make the final decision.
Selection Criteria      Add services to existing   Decouple Line      Decouple Line Interface
                        Line Interface             Interface          plus add user profile
Reliability              0                          +1                 +1
Maintainability         -1                          +1                 +1
Interoperability        -1                          +1                 +2
Portability             -1                          +1                 +2
Scalability             -1                          +1                 +1
Performance              0                          -1                 -1
Time to Market          +1                           0                 -1
Sum +'s                  1                           5                  7
Sum 0's                  2                           1                  0
Sum -'s                  4                           1                  2
Net Score               -3                           4                  5
Rank                     3                           2                  1
Continue?               No                          Yes                Yes
Figure 1. The result table produced by the evaluation process.
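As a rough illustration of how the scores in Figure 1 are aggregated, the following sketch (Python) computes the sums, net scores and ranking from the per-criterion ratings. The ratings and alternative names are taken from the figure; the simple unweighted summation is an assumption, since an evaluation team may well weight the criteria differently in practice.

# Per-criterion expert ratings in the order: Reliability, Maintainability,
# Interoperability, Portability, Scalability, Performance, Time to Market.
ratings = {
    "Add services to existing Line Interface":        [0, -1, -1, -1, -1, 0, +1],
    "Decouple Line Interface":                        [+1, +1, +1, +1, +1, -1, 0],
    "Decouple Line Interface plus add user profile":  [+1, +1, +2, +2, +1, -1, -1],
}

def summarize(scores):
    plus = sum(s for s in scores if s > 0)        # sum of positive ratings
    zeros = sum(1 for s in scores if s == 0)      # number of neutral ratings
    minus = -sum(s for s in scores if s < 0)      # magnitude of negative ratings
    return plus, zeros, minus, plus - minus       # net score = plus - minus

# Rank the alternatives by net score, highest first (matches Figure 1).
ranked = sorted(ratings.items(), key=lambda kv: summarize(kv[1])[3], reverse=True)
for rank, (name, scores) in enumerate(ranked, start=1):
    plus, zeros, minus, net = summarize(scores)
    print(f"{rank}. {name}: +{plus}, {zeros} zeros, -{minus}, net {net}")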

Outcome
Using the developers' knowledge of the application domain, scenarios were developed for three alternative
solutions. Each solution was documented so that it could be compared with the existing architecture. The
solutions were then evaluated and compared by the developers working on the project. Of the initial three
alternatives, one was eliminated and two were selected for further investigation.

7.2.2. Example 2
The second example [5] is from a case study conducted on an American bank's information system for handling
credit card transactions. The focus of the study is on performance attributes, and on how to determine whether the
system will be able to satisfy them.

Problem description
The system mainly has to fulfill two different performance requirements, one concerning the execution time of
critical transactions and one concerning how much storage space is used for each customer stored in the
system. Two performance scenarios are given as examples: 1) the cancellation of lost and stolen credit cards
requires very fast execution in order to minimize the risk of financial loss, and 2) the storage requirements per
cardholder should be minimized, due to the large number of cards in circulation.
Several different solutions for each of the scenarios were proposed, each with a different impact on the
performance of the system. Some affect the response time positively but would have a negative impact on the
storage space requirements. The problem facing the developers is to select the combination of solutions that
together fulfills both of the performance scenarios.

Trade-off method used


The method used to describe the quality goals is the approach described in [4], which focuses on the
quality goals of the system. A goal graph is created for the system in which the goals are broken down. The overall
goals of time and space performance are refined into offspring goals. The offspring goals in turn are refined into
either further offspring goals or into “goal satisfying methods”. These methods are the suggested solutions for the
different aspects of performance in the system. In order to satisfy a goal, all its offspring goals have to be satisfied;
this continues up through the graph until the parent goals of the system are reached (see Figure 2). The goal
satisfying methods in the graph can have a positive or negative relation both to other methods and to goals. For
example, using compression to decrease the storage requirements of the system might increase the time it takes
to modify data, countering the goal of quick cancellation of credit cards.
This method also helps the people performing the evaluation by providing a formal structure to follow.
But apart from helping to structure the information it also tries to support the actual decision making. By
following the tree it is possible to identify the best candidate for the architecture, and the graph also helps to
document all the alternatives that were considered during the evaluation.
[Figure 2 shows a goal graph in which the storage goal for the Card attributes, Stg[Attributes(Card)], is decomposed
layer by layer into goals for the individual attributes Card.Status and Card.OtherAttrs. The goals are refined via
formal and informal claims (e.g. "Card.Status not specialised", "Card.OtherAttrs not accessed during critical
operations") and are ultimately satisfied by goal satisfying methods such as HorizontalSplitting, EarlyFixing,
LateFixing, UncompressedFormat and CompressedFormat.]
Figure 2. Example of a goal graph.
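To make the propagation rule concrete, here is a minimal sketch (Python) of a goal graph in which a goal is satisfied only when all of its offspring goals are satisfied, with leaf goals being covered by goal satisfying methods. The class, attribute and goal names are illustrative assumptions and do not reproduce the notation of [4] or Figure 2.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    offspring: List["Goal"] = field(default_factory=list)
    satisfied_by_method: bool = False   # leaf goal covered by a goal satisfying method

    def satisfied(self) -> bool:
        # A goal with offspring is satisfied only when all offspring are satisfied;
        # a leaf goal is satisfied when some method covers it.
        if not self.offspring:
            return self.satisfied_by_method
        return all(child.satisfied() for child in self.offspring)

# Toy decomposition of the two overall performance goals of the credit card system.
space = Goal("Space performance", offspring=[
    Goal("Store Card.Status uncompressed", satisfied_by_method=True),
    Goal("Store Card.OtherAttrs compressed", satisfied_by_method=True),
])
time = Goal("Time performance", offspring=[
    Goal("Fast cancellation of lost/stolen cards", satisfied_by_method=True),
])
print(space.satisfied(), time.satisfied())   # True True

A fuller model would also record the positive or negative contributions between methods and goals, for example that the compression method above works against the fast-cancellation goal.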

Outcome
The case study describes the successful application of the goal graph method for selecting between a number of
different techniques during the design of the credit card system. The impact of the different suggested solutions on
the system's quality attributes is examined using the goal graph, and the methods leading to the fulfillment of the
requirements are chosen.

7.2.3. Example 3
The third example where a trade-off is present is deciding when to stop testing and release a software product to
the end users. This example is not from a case study but from an experiment documented in [2].

Problem description
When is it safe for the developers to stop the testing and release the software to the end users? Once the testing
process has begun it can basically go on forever, as it is not practically possible to prove that a system is completely,
100%, correct. So the developers have to settle for some level of stability that can be accepted by the end users.
Normally this is achieved by testing the system's expected use, in what is called usage based testing. Usage based
testing focuses the testing effort on the most commonly used functions in the system. Each function is graded with
the likelihood of it being used, and the most likely functions are then tested first and most thoroughly. But for how
long should this testing continue? Spending more time than necessary to achieve the required software stability and
reliability is an added cost to the development organization. If the added cost from “unnecessary” testing can be
kept to a minimum, the development organization can save that much cost and effort.

Trade-off method used


By using reliability growth models it is possible to predict when it is time to end the test phase. The reliability
growth model that is used in the experiment takes two main measures as input. The first is the testing effort that
has been expended, which can be measured in, for example, the number of test cases used, man-hours spent testing,
or CPU time. The amount of testing effort that is consumed can be seen as an indication of how effectively faults are
detected in the software. The testing effort is used together with the fault detection rate (FDR), which measures how
often new defects are found in the system. These two measures are used together to create a software reliability
growth model which can be used to predict the number of remaining defects in the system.
The method helps to predict when it is possible to stop the testing effort, and this prediction is based on metrics
from two activities. The method is thus able to evaluate the maturity of the software system without involving the
opinions of the software developers. This makes it a more independent tradeoff method than those that rely on input
from experts.
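As an illustration of the idea, the sketch below (Python) fits a simple exponential reliability growth curve, m(W) = a(1 - e^(-bW)), to cumulative testing effort and cumulative defect counts, and uses it to estimate the number of remaining defects. The model form, the sample data and the stopping threshold are assumptions made here for illustration; the experiment in [2] uses its own testing-effort-dependent growth model and cost criteria.

import numpy as np
from scipy.optimize import curve_fit

def growth_model(effort, a, b):
    # a: estimated total number of defects, b: fault detection rate per unit effort
    return a * (1.0 - np.exp(-b * effort))

# Cumulative testing effort (e.g. man-hours) and cumulative defects detected (hypothetical).
effort  = np.array([10, 20, 40, 80, 120, 160, 200], dtype=float)
defects = np.array([12, 21, 33, 44, 49, 52, 53], dtype=float)

(a_hat, b_hat), _ = curve_fit(growth_model, effort, defects, p0=(60.0, 0.01))
remaining = a_hat - defects[-1]
print(f"estimated total defects: {a_hat:.1f}, estimated remaining: {remaining:.1f}")

# A simple stopping rule: stop testing once the predicted number of remaining
# defects drops below a threshold agreed with the customer.
if remaining < 2.0:
    print("stop testing")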

Outcome
The examples in [2] show when the test process has achieved a predefined goal. Using the reliability growth
models it is possible to continuously evaluate the testing process and follow the software system as its maturity
level develops. Once the maturity level has reached a stable plateau, the system can be considered stable enough to
be released to the users.

7.2.4. Example 4
Building systems from software components is an approach that has been presented as the future of software
systems development. Instead of creating all the parts of a new system, developers identify the functionality that has
to be provided, buy the needed software components, and build the system using them. However, selecting
between different components involves a trade-off between the different quality attributes that they exhibit.
The problem that presents itself is to identify how different components affect the quality attributes of other
components in the system. These problems can range from different components expecting to have the thread of
control in the system, to differences in time behavior or dynamic memory needs during runtime.

Problem description
This example focuses on the evaluation of three quality attributes of two communication components. Both
components fulfill the functional requirements of the system, i.e. they transport messages from a sender to a
receiver, but they have different portability, performance and maintainability characteristics. The method and
evaluation are described in detail in [6].

Trade-off method used


In order to assess the two communication components it was decided to take two different evaluation
approaches. The first was to create two prototypes that exercised the message passing parts of the two components
and to gather as much “real” performance data as possible. The prototypes were created to simulate the actual
workloads of the system as closely as possible, to give an accurate comparison of how the components will perform
when they are stressed. Portability was also tested using the prototypes, which were moved between Windows
2000 and Linux 2.4 based platforms. The second evaluation was a static analysis of the components' source
code, which tried to evaluate how maintainable the components were. The static analysis calculated
a maintainability index for each of the components, which made it possible to compare them with a common
measure.
The final decision of which component to choose was made by the architect of the system, using experience and
the figures from the quantifications of maintainability and performance. The method is therefore to some extent
dependent on the experience of the people who perform the evaluation, since they decide which component to go for.
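To give a feel for what such a static analysis produces, the sketch below (Python) computes one commonly cited form of the maintainability index from averaged source code metrics. Whether [6] used exactly this variant is not stated here, and the metric values are hypothetical, so the sketch only illustrates how source code measurements can be reduced to a single comparable number.

import math

def maintainability_index(avg_halstead_volume: float,
                          avg_cyclomatic_complexity: float,
                          avg_loc_per_module: float) -> float:
    # Classic three-factor maintainability index; higher means more maintainable.
    return (171.0
            - 5.2 * math.log(avg_halstead_volume)
            - 0.23 * avg_cyclomatic_complexity
            - 16.2 * math.log(avg_loc_per_module))

# Hypothetical metric averages for the two communication components.
mi_a = maintainability_index(avg_halstead_volume=950, avg_cyclomatic_complexity=14, avg_loc_per_module=220)
mi_b = maintainability_index(avg_halstead_volume=620, avg_cyclomatic_complexity=9, avg_loc_per_module=150)
print(f"MI(A) = {mi_a:.1f}, MI(B) = {mi_b:.1f}")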

Outcome
Based on the results of the evaluations it was possible to see that both components showed similar levels of
portability, so this attribute became less important. It was also apparent that one component had lower performance
than the other, but that it on the other hand had a higher maintainability index. However, the choice of component
for use in the system fell on the component with higher performance, as communication performance was
considered more important for the overall performance of the system.

7.3. Discussion
The four examples of trade-off situations and methods that have been presented give some indication of when
and where tradeoffs are performed during software development. The examples that we have looked at cover
situations ranging from the beginning of development, through the testing phase, to maintenance work.
Each phase in software development has its own set of problems. In the creation of the software architecture we
have to find the architecture that is most appropriate for our functional and quality requirements. This forces us to
make tradeoffs between architecture alternatives as well as technical solutions. During this phase we do not know
much about how the system will be implemented, since the design has not yet been completed. Therefore scenario
based evaluation methods and simulations based on formal specifications of the software architecture are commonly
used to gather the data needed for the tradeoff.
Once a system has been implemented and is being tested, we find another tradeoff: when to stop testing
and release the system. The tradeoff between software robustness and the effort that has to be spent on continued
testing needs to be balanced. The analysis of when the software is mature enough can be done using
mathematical models that, based on metrics collected from the testing process, predict the maturity of the software.
In the maintenance phase of the software lifecycle we have to take care not to affect the system's quality
attributes in an unwanted way when changes are made. Therefore a number of alternatives for how the changes
should be made have to be created and evaluated, so that the one with the most desired attributes can be identified.
This evaluation is again usually done using a scenario based approach where a group of domain experts relies on
their experience to select the best alternative. The reason for the popularity of the scenario based approach may be
that it is easy to apply in situations where little information about the actual implementation of the system is
available. Instead, experienced people perform the evaluation, using their experience to fill in the blanks in the
available information.
Some approaches to tradeoffs are applied to several aspects of software development under different names. For
example, several approaches use scenarios to elicit and quantify aspects of, for example, a software
architecture. The scenarios are used to quantify the attributes of the architecture or architectures under evaluation,
but scenarios can be used during requirements elicitation as well.
Which type of trade-off method is applied to a problem probably changes from situation to situation. The
experience of the people facing the trade-off and the information available to them influence the choices they
make. People are probably more likely to use a formalized trade-off method the first time they run into a trade-off,
but using the experience gained from that first trade-off they might be inclined to use a more ad hoc method the
next time they run into a similar problem. We will always have to deal with tradeoffs during software development,
no matter at which level in the organization you look or where in the project lifecycle. Tradeoffs are
ubiquitous.

7.4. References
[1] J. S. Glider, C. F. Fuente, and W. J. Scales, "The software architecture of a SAN storage control system,"
IBM Systems Journal, vol. 42, pp. 232-249, 2003.
[2] C. Y. Huang, S. Y. Kuo, and M. R. Lyu, "Optimal Software Release Policy Based on Cost and Reliability with
Testing Effort," Proc. of the 23rd International Computer Software and Applications Conference (COMPSAC'99),
pp. 468-473, 1999.
[3] C. Lung and K. Kalaichelvan, "An Approach to Quantitative Software Architecture Sensitivity Analysis,"
International Journal of Software Engineering & Knowledge Engineering, vol. 10, pp. 97-115, 2000.
[4] J. Mylopoulos, L. Chung, and B. Nixon, "Representing and Using Non-Functional Requirements: A Process-
Oriented Approach," IEEE Transactions on Software Engineering, vol. SE-18, no. 6, pp. 483-497, June 1992.
[5] B. A. Nixon, "Dealing with Performance Requirements During the Development of Information Systems,"
Proc. of the IEEE International Symposium on Requirements Engineering, pp. 42-49, 1992.
[6] F. Mårtensson, "Evaluating Software Quality Attributes of Communication Components in an Automated
Guided Vehicle System," Proc. of the IEEE International Conference on Constructing Complex Computer
Systems, 2005.
