
Accepted Manuscript

A Survey on Different Approaches for Software Test Case Prioritization

Rajendrani Mukherjee, K. Sridhar Patnaik

PII: S1319-1578(18)30361-6
DOI: https://doi.org/10.1016/j.jksuci.2018.09.005
Reference: JKSUCI 501

To appear in: Journal of King Saud University - Computer and Information Sciences

Received Date: 17 April 2018


Revised Date: 26 July 2018
Accepted Date: 4 September 2018

Please cite this article as: Mukherjee, R., Sridhar Patnaik, K., A Survey on Different Approaches for Software Test Case Prioritization, Journal of King Saud University - Computer and Information Sciences (2018), doi: https://doi.org/10.1016/j.jksuci.2018.09.005

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers
we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and
review of the resulting proof before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
A Survey on Different Approaches for Software Test Case Prioritization

Abstract
Testing is the process of evaluating a system by manual or automated means. While Regression Test Selection (RTS) discards test cases and Test Suite Minimization (TSM) reduces the fault detection rate, Test Case Prioritization (TCP) does not discard any test case. Test case prioritization techniques can be coverage based, historical information based or model based; they can also be cost-time aware or requirement-risk aware. GUI/Web applications need special prioritization mechanisms. In this paper, 90 scholarly articles ranging from 2001 to 2018 have been reviewed. We have explored the IEEE, Wiley, ACM Digital Library, Springer, Taylor & Francis and Elsevier databases. We have also described each prioritization method with its findings and subject programs. This paper includes a chronological catalogue listing of the reviewed papers. We have framed three research questions which sum up the frequently used prioritization metrics, the regularly used subject programs and the distribution of different prioritization techniques. To the best of our knowledge, this is the first review with a detailed report of the last 18 years of TCP techniques. We hope this article will be beneficial for both beginners and seasoned professionals.
Keywords: regression, prioritization, techniques, program, fault, coverage

1. Introduction

The main objective of testing is to make sure the deliverable software has no bugs. The testing phase of the SDLC is an elaborate task in terms of cost, time and resources. In the Test First approach (Erdogmus et al., 2005), test suites are built before coding starts. A study involving undergraduate students showed that as developers write more tests per unit of programming, higher productivity and consistent quality are ensured (Erdogmus et al., 2005).

Regression testing ensures that no new errors have been introduced in the changed application (Chittimalli and Harrold, 2009). However, execution of all scheduled test cases is next to impossible because of time, cost and resource constraints. If a VB application has 5 screens where each screen has 4 combo boxes and each combo box has 5 items, then the application will yield 5 * 4 * 5 = 100 tests. For large systems it is not viable to check every data or input combination. From this point, the concepts of RTS (Regression Test Selection), TSM (Test Suite Minimization) and TCP (Test Case Prioritization) gained value among researchers. While test case selection discards several test cases (Rothermel and Harrold, 1996), test suite reduction shows a remarkable loss of fault detection rate (Rothermel et al., 1998; Elbaum et al., 2003). Test case prioritization techniques do not discard test cases and run the most significant test cases first (Rothermel et al.). TCP is an Improvement Testing mechanism (Juristo et al., 2004) which can also be applied at the initial phase of testing.

The test case prioritization problem is addressed in several ways by several researchers. While coverage aware prioritization techniques focus on maximizing coverage information (statement/branch/function coverage etc.), requirement and risk aware prioritization work proposes PORT (Prioritization of Requirements for Testing) (Srikanth et al., 2005). Even though severely criticized as a test case selection mechanism, the historical record of previous test run data (Kim and Porter, 2002) also helps in prioritizing test cases. Knapsack solvers (Alspaugh et al., 2007) or ILP (Integer Linear Programming) (Zhang et al., 2009a) help in prioritizing test cases in a time constrained environment. If no time constraint is applied, the benefit of ordering test cases becomes negligible because TCP techniques have their own implementation overhead. As gathering execution information is a costly effort, model based TCP is becoming popular nowadays (Korel et al., 2005). Event Driven Software (EDS) such as GUI/Web based applications generates a bulk amount of test cases from all possible combinations of events (mouse click, menu open etc.) and needs its own test case ordering mechanism.

The ongoing importance of TCP techniques encouraged us to prepare this review article. The paper attempts to make the following contributions:

1. In this paper, we have reviewed 90 scholarly articles from 2001 to 2018 on TCP. As the review work by Yoo and Harman (2007) narrated 47 papers on TCP till 2009, we wanted to bridge the gap by increasing the time frame and the number of papers. The chosen papers in this review are well sampled as we have considered more than 5 papers in each of the above mentioned prioritization techniques.
2. This review paper also describes each prioritization mechanism along with its techniques, results, subject programs and shortcomings in detail.


While Catal and Mishra (2013) built a systematic mapping study on TCP, it lacked the technical details of subject programs, results, shortcomings etc. To the best of our knowledge, this is the first detail-oriented review report of the last 18 years of TCP techniques.
3. We have also built a chronological catalogue at the end of each subsection. The catalogue will act as a summarized snapshot.
4. The final motivation of this review article is to help beginners as well as experts who are conducting research in the TCP domain. Three research questions have been designed and we hope their findings are valuable. This review paper addresses the following RQs:
(a) RQ1: Which metrics are used often for TCP?
(b) RQ2: What are the frequently used subject programs in prioritization studies?
(c) RQ3: Which prioritization methods are commonly explored and what are their proportions?

Our review is structured as follows. Section 2 discusses basic facts and recent trends in testing. Section 3 presents the definition of the test case prioritization problem and its background with related work. Section 4 narrates our search strategy and analyzes the findings from the research questions. Section 5 describes each of the prioritization methods with their shortlisted papers and a chronological catalogue listing. Section 6 discusses the scope of future work and concludes our review.

2. Basics of Testing - Facts and Trends

Before getting into the details of test case prioritization, in this section we would like to summarize the necessity and types of testing along with some recent trends and developments in testing tools.

Apart from maintaining the quality of a product, testing also ensures the proper working of the expanded version of a software system. In today's competitive market, a software system should evolve to stay pertinent. As testing by intuition depends on individual performance, rigorous testing by a dedicated team of testing professionals is always recommended. Insufficient testing causes missed faults, which require more cost to fix and a noticeable amount of rework. A defect detected in the field costs 1000 times more than one found in the requirement analysis phase (Srikanth et al., 2009). Coders at Google make more than 20 changes per minute in the source code and 100 million test cases get executed per day (Thomas et al., 2014). A test suite is a collection of test cases. A test case is a triplet of input, program state and expected result. Test suites can be reused later, and test cases actually determine the success and maturity of testing.

Now we would like to sum up several testing techniques. Functional testing techniques are considered black box testing because they study only the inputs and their corresponding outputs. Equivalence class partitioning and boundary value analysis are different forms of functional testing strategy. They do not need knowledge of the internal source code. The main advantage of black box oriented techniques is that they do not need to collect and store coverage information (Henard et al., 2016). In the case of control flow testing, detailed knowledge of the source code is required. The program flow is analyzed by selecting a set of paths (a code chain that goes from the beginning to the end of a program). Statement coverage, branch/decision coverage and MC/DC are different forms of control flow testing strategy. Branch coverage is stronger than statement coverage. MC/DC is stronger than statement and branch coverage but weaker than condition coverage. Data flow testing techniques also need knowledge of the source code. Mutation testing involves producing a series of mutants. Faulty versions of programs are called mutants, which are produced by altering program statements. A mutant is considered killed if the test suite can detect the fault. Scalability is an unsolved issue for mutation testing as big programs can have a significant number of mutants. Regression testing ensures that a program modification is not causing existing features to regress. Sometimes modifying a program or fixing a bug may create a new one. Random, retest-all and safe techniques (for example DejaVu, TestTube and textual differencing) are several regression testing techniques. The retest-all strategy is applicable for small programs but fails for big programs. A safe regression test selection technique chooses each test case that reveals at least one fault. As safe techniques require less time to run, they are appropriate for large programs.

Nowadays, agile regression testing is becoming popular. While performing agile testing, test cases get executed after each sprint/delivery window. In the Test First approach (Feldt et al., 2016), test suites are built before the coding starts. Testing becomes an overhead activity if it starts only after coding is complete. Continuous testing overcomes this drawback (Saff and Ernst, 2003; Elbaum et al., 2014; Memon et al., 2017). In continuous testing, test scripts are run in between program changes. In the case of incremental testing, testing is done very frequently and in between two updates. For batch testing processes, the testing window is sufficiently stretched and testing is usually performed at night.

We would also like to enlist some testing tools which have made testing easier. Quality Center is a requirement and test management tool offered by Hewlett Packard. UFT (Unified Functional Testing) is another automated testing tool by HPE which performs regression testing for UI based and non UI based test cases; it was formerly known as QTP (Quick Test Professional). JaBUTi (Java Bytecode Understanding and Testing) is a coverage analysis tool for Java applications. Jumble helps in measuring the coverage of JUnit tests and operates at the test class level. Jester is another open source lightweight tool for testing Java applications which finds uncovered code. JMeter is an open source load testing tool which helps in monitoring performance.
Selenium is a capture and replay based automation tool for testing web applications and provides a portable framework. WinRunner and LoadRunner are other testing tools designed by HPE. Segue's SilkTest and SilkPerformer help in doing regression testing and web/mobile application testing respectively. All these testing tools have equipped testing techniques to find more bugs easily.

3. Test Case Prioritization - Background and Related Work

In this section we would like to narrate the origin and importance of test case prioritization along with a concise snapshot of several important related works in this field.

Regression testing accounts for 80% of the testing budget (Chittimalli and Harrold, 2009). Implementing changed as well as new requirements, revalidating the software and providing quick bug fixes are important parts of regression testing. Regression testing is very much time and resource constrained, and it is encountered frequently. Regression testing of real time embedded systems is heavily time constrained as their simulation environment is very demanding and hosts multiple projects (Kim and Porter, 2002). Retest-All, Regression Test Selection (RTS), Test Suite Minimization (TSM) and Test Case Prioritization (TCP) are the predominant regression testing techniques. Each of these approaches has its own advantages and disadvantages. The Retest-All strategy holds good when the test suite is small. However, as the test suite scales up, an ordering mechanism becomes necessary. Rothermel and Harrold (1996) discussed several regression test selection techniques. A safe regression test selection technique chooses each test case that reveals at least one fault, but it still does not ensure safe selection because the criteria under which safety prevails do not always get fulfilled (Rothermel et al., 2001). Unsafe test case selection techniques discard several test cases. Wong et al. (1998) showed that test suite minimization causes a very minimal decrease (2%-7%) in fault detection rate, but several studies contradicted this fact (Elbaum et al., 2003; Rothermel et al., 1998). They argued that in many cases the test suite reduction process ensures 100% coverage of requirements but does not assure the same fault detection ability as the actual test suite. Elbaum et al. (2003) showed a remarkable loss (60%) of fault detection rate while running minimization techniques. Test Case Prioritization (TCP) overcomes these drawbacks of selection or reduction mechanisms by not discarding test cases. Under TCP, test cases with higher priority get executed earlier while conducting testing. If conducted offline, TCP will save time and cost and will not become an overhead. Test case prioritization can be of two types - general test case prioritization and version specific test case prioritization. In general test case prioritization, the prioritization ordering is useful for successive modified versions of a program. However, for version specific test case prioritization, the ordering is beneficial for one particular version only.

Definition 1. For a test suite T, find T′ ∈ PT such that f(T′) ≥ f(T″) for every T″ ∈ PT with T″ ≠ T′, where f is a quantifiable performance (goal) function and PT is the set of all possible prioritization orderings of T.

Juristo et al. (2004) referred to Test Case Prioritization as Improvement Testing, as it can be associated with any other technique to increase the fault detection rate. In test case prioritization, test cases are ordered based on some criteria. The goal of test case prioritization can be manifold: increasing the fault detection rate, increasing the capture of high priority requirements, or decreasing the cost and time of the prioritization mechanism. Even though test case prioritization is mainly applied for regression testing, it can also be applied for software maintenance or at the initial phase of testing.

In this paper, we have reviewed 90 scholarly articles from 2001 to 2018 on test case prioritization techniques. Several researchers have addressed the test case prioritization problem in several ways. Section 5 of this paper narrates each publication with its applied methodology, subject programs, outcomes and shortcomings. Coverage aware prioritization techniques mainly aim at maximizing coverage as an intermediate goal, while the ultimate goal is improving the APFD (Average Percentage of Faults Detected) rate. Historical information based TCP techniques utilize historical records about previous runs of test cases or historical fault information. Cost centric prioritization schemes focus on building cost models by subdividing cost into several parameters such as cost of analysis, cost of maintenance and cost of execution. A new metric, APFD_C, is used which considers varying fault severity and test case cost. Time aware TCP techniques use knapsack solvers or ILP (Integer Linear Programming) to address the issue of time constraints in regression testing. As the main aim of testing is to find out whether the delivered product meets the customer's requirements, RBT (Requirement Based Testing) is becoming popular. Clustering of requirements and selecting more test cases from higher priority clusters are also gaining attention. Risk exposure based prioritization is yielding high fault detection rates. Stallbaum et al. (2008) generated test cases from activity diagrams and prioritized them based on the associated risks. Model based prioritization is another possibility for addressing the TCP problem, where specification models (event state diagrams/activity diagrams) represent system behaviour and the model execution information is used to order test cases. TCP techniques for GUI/Web based applications have the edge of using user session data as test data. Bryce and Memon (2007) proposed interaction coverage based TCP. Studies conducted by Srivastava and Thiagarajan (2002) and Nardo et al. (2015) proved that TCP techniques scale well for real world systems of a million LOC.

In the next section, we would like to explore some research questions regarding test case prioritization. The RQs will help researchers gain important insights about the frequently used metrics for prioritization, the commonly used subject programs for study, prominent researchers in this field etc.
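Definition 1 can be made concrete with a small illustration. The sketch below is our own example, not code from any surveyed paper: it enumerates every ordering of a tiny hypothetical test suite and keeps the one that maximises a goal function f (an APFD-style reward for exposing new faults early). Brute force over PT is only feasible for very small suites, which is exactly why the heuristic techniques surveyed later exist.

```python
from itertools import permutations

# Hypothetical fault matrix: which faults each test case detects.
detects = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": {"f1", "f2"}}

def apfd_like_goal(order):
    """Goal function f: reward orderings that expose new faults early."""
    seen, score = set(), 0.0
    for position, test in enumerate(order, start=1):
        new = detects[test] - seen
        score += len(new) / position   # new faults found earlier weigh more
        seen |= detects[test]
    return score

# Definition 1: pick T' in PT such that f(T') >= f(T'') for every T'' in PT.
best = max(permutations(detects), key=apfd_like_goal)
print(best, apfd_like_goal(best))
```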
4. Research Methodology

4.1. Search Strategy

We have searched for papers in the following repositories: IEEE Xplore, Wiley Online Library, ACM Digital Library, Springer, Taylor & Francis and Elsevier. We have also selected conference and workshop proceedings. Even though the initial search retrieved many studies, after examining the title, abstract, citations and relevancy we narrowed our chosen papers down to 90. All 90 papers were read in full before preparing this review. The sample of 90 chosen papers is rich in diversity as it considers more than 5 papers in each category of prioritization scheme - coverage aware TCP, historical information based TCP, time-cost centric TCP, model based TCP, requirement-risk aware TCP, GUI/Web application focused TCP etc.

Inclusion and Exclusion Criteria: We came across 3 review papers (Catal and Mishra, 2013; Yoo and Harman, 2012; Hao et al., 2016) on TCP before this. The review work of Yoo and Harman considered 47 papers till 2009 (Yoo and Harman, 2012). We wanted to expand the horizon by selecting 90 papers till 2018. The systematic mapping study done by Catal and Mishra in 2013 did not explore the technical details of each prioritization methodology (Catal and Mishra, 2013). We have tried to explain each prioritization method with its techniques, outcome and subject programs. The chronological catalogue listing of the selected papers at the end of each subsection is another added feature of this review work. To the best of our knowledge, this is the first review work with a summary report of the last 18 years of TCP techniques. Papers narrating test case generation or test case execution are excluded.

Research Questions: We have also framed 3 research questions. The findings of these RQs will make the review useful for experts as well as beginners who are willing to conduct research on TCP techniques. Table 1 depicts the RQs with their benefits.

4.2. RQ1 - Which metrics are used often for TCP?

Average Percentage of Faults Detected (APFD) is the most dominant metric for addressing prioritization. Catal and Mishra (2013) have stated that 34% of the chosen papers have used the APFD metric. However, apart from APFD, researchers have proposed Savings Factor (SF), Coverage Effectiveness (CE), APFD_C (APFD per cost), NAPFD (Normalized APFD) etc. for measuring the effectiveness of prioritization mechanisms. Table 2 tabulates several useful metrics.

Figure 1: Distribution of Prioritization Techniques

4.3. RQ2 - What are the frequently used subject programs in prioritization studies?

This RQ investigates commonly used subjects (programs, applications) for experimentation with TCP techniques. We have come across several C/Java programs which are repeatedly used for TCP studies. Certain case study applications or web based applications are also chosen recurrently. Table 3 enlists the findings for RQ2.

4.4. RQ3 - Which prioritization methods are commonly explored and what are their proportions?

There is a wide variety of available prioritization techniques. The aim of this research question is to categorize them and find out their proportions. We have chosen 90 publications from the last 18 years (2001-2018). Section 5 of this paper details each of these prioritization schemes.

1. Coverage aware methods are most prevalent (10%) while requirement based methods (7.77%) are the second most common.
2. Time, cost and historical information based TCP techniques are also becoming popular (5.5% in each case).
3. 4.4% of the chosen papers involved real time systems.
4. Search algorithm based ordering is explored in 4.4% and risk aware ranking is observed in 3.3% of the total selected papers.
5. Even though coverage aware methods are dominant, 2.2% of the papers researched algorithms without coverage information.
6. If one technique is used in only one paper, then that technique is shown in the others category.

Fig. 1 charts the distribution of several TCP techniques. We have also charted a year wise distribution of papers in Fig. 2. Test case prioritization studies gained peak attention from 2007 to 2010 and also surged during 2016. After seeing the year wise distribution we can conclude that the interest of researchers in test case prioritization is very much active.
Table 1: Research Questions and Motivation

RQ1: Which metrics are used often for TCP?
Benefit from findings: This RQ will help in rebuilding the existing metrics while proposing new ones.

RQ2: What are the frequently used subject programs in prioritization studies?
Benefit from findings: New researchers in this field will find it easier to replicate the existing studies with these subject programs. If a new method is proposed, it can also be verified using these subjects.

RQ3: Which prioritization methods are commonly explored and what are their proportions?
Benefit from findings: This RQ will give an idea about which techniques can be explored more.

Table 2: Metrics Used by Several Prioritization Studies

APFD (Average Percentage of Faults Detected): APFD values range between 0 and 100, where a higher value indicates better fault detection.
SF (Savings Factor): SF translates APFD onto a benefit scale. An SF of 1000 indicates that 1% of APFD gain creates a savings of 1000 dollars.
CE (Coverage Effectiveness): CE values range between 0 and 1. CE considers the cost and coverage of individual test cases. If test requirements are fulfilled, a high CE is achieved.
CI (Convergence Index): CI helps in deciding when to stop testing.
APFD_C (APFD per cost): Units of fault severity detected per unit of test cost are calculated.
NAPFD (Normalized APFD): This metric considers both fault detection and time of detection.
ASFD (Average Severity of Faults Detected): ASFD for requirement i is the summation of severity values of faults detected for that requirement divided by TSFD (Total Severity of Faults Detected).
RP (Most Likely Relative Position): The average relative position of the first failed test case that finds a defect is used. This metric is primarily used for model based TCP.
APDP (Average Percentage of Damage Prevented): APDP is used to measure the effectiveness of risk based test case derivation and prioritization.
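As a companion to Table 2, the sketch below computes APFD for a given test ordering using the widely cited formula APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where n is the number of test cases, m the number of faults, and TFi the position of the first test exposing fault i. The fault matrix is hypothetical; the result is on a 0..1 scale (multiply by 100 for the percentage form of Table 2).

```python
def apfd(order, detects, faults):
    """APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n),
    where TFi is the 1-based position of the first test exposing fault i."""
    n, m = len(order), len(faults)
    total_tf = 0
    for fault in faults:
        tf = next(i for i, t in enumerate(order, start=1) if fault in detects[t])
        total_tf += tf
    return 1 - total_tf / (n * m) + 1 / (2 * n)

# Hypothetical data: 4 test cases, 3 faults.
detects = {"t1": {"f1"}, "t2": {"f2"}, "t3": {"f1", "f3"}, "t4": set()}
faults = ["f1", "f2", "f3"]
print(apfd(["t3", "t2", "t1", "t4"], detects, faults))  # good ordering, ~0.79
print(apfd(["t4", "t1", "t2", "t3"], detects, faults))  # poor ordering, ~0.38
```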

Table 3: Subject Programs Used by Several Prioritization Studies

tcas (138 LOC), schedule2 (297 LOC), schedule (299 LOC), tot_info (346 LOC), print_tokens (402 LOC), print_tokens2 (483 LOC), replace (516 LOC), space (6218 LOC): These are 8 C programs from SIR (Software-artifact Infrastructure Repository), which is publicly available. The first 7 are called the Siemens programs and the eighth, space, is a program from the European Space Agency.
grep (7451 LOC), flex (9153 LOC), sed (10 KLOC): UNIX utility programs.
ant (80.4 KLOC), jmeter (43.4 KLOC), xml-sec (16.3 KLOC), jtopas (5.4 KLOC): Java programs.
Siena (≥ 2.2 KLOC): Java program.
Bash (≥ 50 KLOC): A shell program that provides an interface to Unix services.
Empire (63 KLOC): Implemented in C. Open source software - a game played between several players.
GradeBook, JDepend: Case study applications. They perform grading and generate metrics respectively.
TerpOffice (≥ 90 KLOC): Includes TerpWord, TerpPaint, TerpCalc etc.

Figure 2: Year wise Paper Publications on Test Case Prioritization

5. Review of Different Approaches for Test Case Prioritization

In this section we would like to highlight 90 prominent research publications from each avenue of test case prioritization. Test case prioritization techniques follow several approaches:

1. Coverage based Prioritization
2. Historical Information based Prioritization
3. Cost Aware Prioritization
4. Time Aware Prioritization
5. Requirement and Risk Aware Prioritization
6. Model based Prioritization
7. GUI/Web Application focused Prioritization
8. Other Special Approaches (Ant Colony Optimization techniques, search algorithms, clustering techniques etc.)

For each publication, the methods applied for ordering test cases, the subject programs of experimentation and the outcomes along with shortcomings are detailed in a catalogue listing in chronological order. The catalogue also acts as a summarized snapshot of the discussed publications. A separate subsection is devoted to prioritization techniques applied on real world/industrial case studies.

5.1. Coverage Aware Test Case Prioritization Techniques

Coverage aware prioritization techniques aim to maximize the coverage of program elements (statements/branches/methods etc.) by a test case and need detailed knowledge of the source code. They basically follow a white box/structural testing approach (Fig. 3). Achieving maximum coverage may be visualized as an intermediate goal, while the ultimate goal is an enhanced fault detection rate.

Now we would like to highlight 9 research papers which have added a new dimension to code coverage based prioritization techniques. Coverage based test case prioritization techniques came into attention in 2001, as Rothermel et al. (2001) showed that better coverage yields a better fault detection rate.
Figure 3: Methodology of Coverage Aware TCP Technique

Nine prioritization techniques were applied on eight C programs from SIR (Software-artifact Infrastructure Repository). The most insightful outcome of this study revealed that for the first seven programs (called the Siemens programs) the total techniques outperformed the additional techniques, but for the eighth program, space, which is a real and huge program from the European Space Agency, the additional techniques performed better. The additional techniques are called feedback or greedy strategies as they iteratively select first a test case of highest coverage and then adjust the coverage details of the remaining test cases for elements (statements/branches etc.) not yet covered. The shortcoming of this study was that it considered all faults to be of the same severity and did not incorporate a cost factor. In continuation of Rothermel's study, Elbaum et al. conducted another empirical assessment which classifies the prioritization techniques as coarse granularity (function level techniques) and fine granularity (statement level techniques) (Elbaum et al., 2002). The concept of SF (Savings Factor) was introduced in this work. SF is a metric that translates the APFD measure onto a benefit scale: an SF of 1000 indicates that a 1% APFD gain creates a savings of 1000 dollars. The study concludes that the fine granularity techniques outperform the coarse granularity techniques by a very small edge. This marginal gain is not beneficial because, for larger systems, statement level techniques are too tedious and expensive while function level techniques are less expensive. Jones and Harrold (2001) for the first time applied test case prioritization techniques on an MC/DC test suite. The authors proposed a build up technique (test cases are added to an empty test suite) and a break down technique (removing low contribution test cases) by identifying essential and redundant test cases.

All of the previous studies mainly used C programs for implementing the prioritization prototypes. We would like to bring up the work conducted by Do et al. (2006), which for the first time focused on prioritizing JUnit test cases. For the JUnit framework, test cases were categorized into two segments - test cases at the test class stage and test cases at the test method stage. Apart from analyzing JUnit test cases for the first time, this study also added a new viewpoint in terms of time and money. If the prioritization method involves a feedback strategy, then the cost of prioritization increases. However, this study neither considered other test suites nor incorporated object oriented features (inheritance, encapsulation, polymorphism etc.) while testing Java programs. We would like to mention another experiment executed by Do and Rothermel (2006) which studied the same open source Java programs of the previous work with mutation faults. This study eliminated the shortcoming of the previous work by considering a TSL (Test Specification Language) test suite. In other research conducted in 2007, Kapfhammer and Soffa introduced a metric, CE (Coverage Effectiveness), for ordering test suites. If test requirements are fulfilled quickly, a high CE is achieved (Kapfhammer and Soffa, 2007). Fang et al. (2012) executed another experiment which focused on logic coverage testing. Branch coverage and modified condition/decision coverage are examples of logic coverage testing methods. Logic coverage is widely accepted for safety-critical software. The authors coined the term CI (Convergence Index), which is used to decide when to stop testing. A quick convergence indicates low testing cost and a rapid fault detection rate.

In almost all the previous studies, the authors mentioned a significant performance margin between prioritized orderings and optimal orderings. However, we would like to highlight a recent work by Hao et al. (2016a) which implemented the optimal coverage based prioritization technique by ILP (Integer Linear Programming) and ruled out that fact. The study concluded that the optimal technique is notably worse than the additional technique in terms of fault detection rate or execution time. Table 4 represents a concise snapshot of the above papers.

Though there is a logical establishment that greater coverage indicates greater effectiveness, coverage is just one parameter for prioritizing test cases. Maximum coverage does not always ensure that all faults will be covered. As the main goal of testing is to ensure the delivery of a quality product within a stipulated time and budget, TCP techniques should be time, cost and requirement centric.
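The total and additional (feedback/greedy) strategies discussed in this subsection can be sketched as follows. The statement coverage data is invented for illustration; real implementations work on instrumented coverage traces rather than hand-written sets.

```python
def total_prioritization(coverage):
    """Total strategy: order by each test's overall coverage, highest first."""
    return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

def additional_prioritization(coverage):
    """Additional (feedback/greedy) strategy: repeatedly pick the test covering
    the most statements not yet covered; reset when nothing remains uncovered."""
    remaining = dict(coverage)
    uncovered = set().union(*coverage.values())
    order = []
    while remaining:
        if not uncovered:                      # everything covered: reset feedback
            uncovered = set().union(*coverage.values())
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        order.append(best)
        uncovered -= remaining.pop(best)
    return order

# Hypothetical statement coverage per test case.
coverage = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 2, 3, 4}}
print(total_prioritization(coverage))       # ['t4', 't1', 't2', 't3']
print(additional_prioritization(coverage))  # ['t4', 't3', 't1', 't2']
```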
Table 4: Summary of Publications on Coverage Aware Prioritization Techniques

(Rothermel et al., 2001). Technique: Nine prioritization techniques (total/additional etc.) were introduced. Results: Prioritization of test cases helps in increasing APFD. Subject programs: 8 C programs (Siemens and space programs).

(Jones and Harrold, 2001). Technique: Prioritization techniques were applied on an MC/DC test suite. Results: For larger systems the build up technique is beneficial as it requires less time. Subject programs: tcas, space.

(Elbaum et al., 2002). Technique: The concepts of coarse granularity (function level) and fine granularity (statement level) were introduced. Results: Fine granularity techniques outperform the coarse granularity techniques by a very small edge. Subject programs: real time embedded system QTB, UNIX utilities flex and grep.

(Do et al., 2006). Technique: Coverage based techniques were applied to JUnit test cases for the first time. Results: In some cases (ant and xml-security) non-control techniques performed better. Subject programs: four Java programs - ant, jmeter, xml-security and jtopas.

(Do and Rothermel, 2006). Technique: The use of mutation faults was studied. Results: For ant the non-control techniques performed better, while for jtopas there was no improvement. Subject programs: two Java programs (galileo and nanoxml) with TSL test suites, and four Java programs (ant, xml-security, jmeter, jtopas) with JUnit.

(Kapfhammer and Soffa, 2007). Technique: Coverage Effectiveness was used to rank test cases. Results: A high CE is achieved if test requirements are fulfilled.

(Fang et al., 2012). Technique: The concepts of logic coverage and convergence index were introduced. Results: Quick convergence is necessary for low cost and rapid fault detection. Subject programs: 3 Java programs - Tcas, NanoXML, Ant.

(Hao et al., 2016b). Technique: ILP was used to implement optimal coverage based prioritization. Results: The optimal method is worse than the additional technique. Subject programs: Siemens and space programs, 2 Java programs (jtopas and siena).
Table 5: Test Case Selection Parameters

TC: Test case.
T: Time instant.
Htc = {h1, h2, ..., ht}: Time ordered observations obtained from previous runs.
α: Weight factor for an individual observation (a lower value emphasizes older observations; a higher value emphasizes recent observations).
Ptc,t(Htc, α): Selection probability of each test case.

Leon and Podgurski (2003) compared the differences between coverage based and distribution based techniques (how the execution profiles of test cases are distributed) and showed that distribution based techniques may be more efficient than coverage based techniques. Event Driven Software (GUI/Web applications) and real world case studies from industry also pose special challenges for prioritizing test cases. All these observations have pushed test case prioritization researchers to explore other options. In the subsequent subsections, we narrate the other test case prioritization schemes - history based TCP, time/cost aware TCP, model based TCP, requirement based TCP etc. - one by one.

5.2. Historical Information Based Test Case Prioritization Techniques

Most of the proposed prioritization techniques are memoryless. However, as regression testing is not a one time activity, some kind of memory function should be associated with it. In this subsection, we would like to highlight 5 research publications which consider test case execution history.

Kim and Porter (2002) proposed a TCP method using historical information about each test case's performance record. The authors assign a selection probability to each test case, which is described in Table 5. The concept of fault age was also introduced by this work: if a fault remains undetected by a test run, then the fault age increases. However, this technique is strongly criticized by many as it is essentially a test case selection mechanism.

Park et al. (2008) criticized the existing TCP techniques as value-neutral. The existing methods consider all faults to be of the same severity and all test cases to have the same test cost. However, this is not the real case. The authors introduced a historical value-based technique with a cost centric approach. A historical information repository module was used to keep records of each test case's previous cost and fault severity data. Fazlalizadeh et al. (2009) mentioned that finding an optimal execution order of test cases does not have any deterministic solution. Kim and Baik (2010) proposed FATCP (Fault Aware Test Case Prioritization), which utilized historical fault information by using a fault localization technique. Fault localization is actually a debugging activity which points out the location of a fault or a set of faults in a program. FATCP outperformed branch coverage and statement coverage prioritization methods for most of the Siemens programs. Huang et al. (2012) proposed MCCTCP (Modified Cost Cognizant TCP) based on test case execution history. MCCTCP does not need analysis of the source code and feeds the historical information of each test case to a GA (genetic algorithm). Table 6 represents a concise snapshot of the above papers.

5.3. Cost Cognizant Test Case Prioritization Techniques

While discussing coverage aware prioritization prototypes, we talked about the Savings Factor (SF). Determination of SF is basically an attempt to convert APFD into a benefit scale. Even though test case prioritization may cause savings, it has its own execution/implementation overhead. Building a cost model is henceforth necessary, and in this subsection we would like to focus on 5 publications which calculate cost-benefit tradeoffs for TCP techniques.

Malishevsky et al. (2002) subdivided cost into several sub-parameters. Table 7 shows the proposed cost categorization. Another study conducted by Elbaum et al. (2004) introduced the concept of a cost-benefit threshold, which indicates a borderline percentage that needs to be crossed in order to produce a positive/effective APFD gain. Malishevsky et al. (2006) proposed a cost aware TCP technique that overcomes the shortcomings of the APFD metric. They introduced APFD_C, which considers varying fault severity and test case cost. This new metric is termed 'units-of-fault-severity-detected-per-unit-of-test-cost'. While plotting APFD_C, the x-axis indicates the percentage of total test case cost incurred instead of the percentage of the test suite executed (as in APFD), and the y-axis indicates the percentage of total fault severity detected instead of the percentage of faults detected. We would like to bring up another study conducted by Smith and Kapfhammer (2009). That experiment included 8 real world case study applications to show the effect of cost on building smaller and faster fault-detection-worthy test suites. Testing of system configurable software was experimented with by Srikanth et al. (2009), who showed that the cost of configuration and set up time plays an important role. Table 8 represents a concise snapshot of the above papers.
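The APFD_C metric described above weights fault severity and test cost. The sketch below follows one published formulation of cost-cognizant APFD (severity-weighted remaining test cost from the first detecting test onward, with half credit for that test, normalised by total cost times total severity); treat it as an assumption-laden illustration rather than the authors' exact code. All inputs are hypothetical.

```python
def apfd_c(order, detects, cost, severity):
    """Cost-cognizant APFD (one published formulation): for each fault, credit the
    test cost remaining from the first detecting test onward (half credit for that
    test itself), weighted by fault severity, normalised by total cost * severity."""
    total_cost = sum(cost[t] for t in order)
    total_sev = sum(severity.values())
    score = 0.0
    for fault, sev in severity.items():
        tf = next(i for i, t in enumerate(order) if fault in detects[t])  # 0-based
        remaining_cost = sum(cost[t] for t in order[tf:]) - 0.5 * cost[order[tf]]
        score += sev * remaining_cost
    return score / (total_cost * total_sev)

# Hypothetical inputs: per-test cost and per-fault severity.
detects = {"t1": {"f1"}, "t2": {"f2"}, "t3": {"f1", "f2"}}
cost = {"t1": 4.0, "t2": 1.0, "t3": 2.0}
severity = {"f1": 3.0, "f2": 1.0}
print(apfd_c(["t3", "t2", "t1"], detects, cost, severity))  # cheap, severe-first: ~0.86
print(apfd_c(["t1", "t2", "t3"], detects, cost, severity))  # costly ordering: ~0.63
```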
Table 6: Summary of Publications on History based Prioritization Techniques

(Kim and Porter, 2002). Technique: A TCP method using historical information about test cases was proposed. Results: The authors assign a selection probability to each test case. Subject programs: 8 C programs (Siemens and space).

(Park et al., 2008). Technique: A historical value-based technique with a cost centric approach was proposed. Results: Test case execution cost and fault severity are important factors. Subject programs: 8 versions of the open source Java program ant.

(Fazlalizadeh et al., 2009). Technique: The priority of a test case in previous regression test sessions was considered. Results: Optimal execution of test cases does not have a deterministic solution. Subject programs: 8 C programs (Siemens and space).

(Kim and Baik, 2010). Technique: FATCP (Fault Aware Test Case Prioritization) was used. Results: FATCP outperformed the branch coverage and statement coverage methods. Subject programs: 8 C programs (Siemens and space).

(Huang et al., 2012). Technique: MCCTCP (Modified Cost Cognizant TCP) was utilized for prioritization. Results: Improvement in fault detection was observed. Subject programs: two UNIX utilities, sed and flex.
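Table 5 describes a selection probability P_tc,t(Htc, α) built from time-ordered observations and a weight factor α. A common reading of such a scheme is exponential smoothing over the run history, sketched below with hypothetical pass/fail histories; the exact update rule used by Kim and Porter (2002) may differ in detail.

```python
def selection_probability(history, alpha):
    """Exponentially weighted smoothing over time-ordered observations h1..ht
    (one reading of P_tc,t(Htc, alpha) in Table 5): recent runs dominate when
    alpha is high, older runs when alpha is low."""
    prob = history[0]
    for h in history[1:]:
        prob = alpha * h + (1 - alpha) * prob
    return prob

# Hypothetical history: 1 = the test case exposed a fault in that run.
histories = {"t1": [0, 0, 1, 1], "t2": [1, 1, 0, 0], "t3": [0, 1, 0, 1]}
ranked = sorted(histories, key=lambda t: selection_probability(histories[t], 0.6),
                reverse=True)
print(ranked)  # ['t1', 't3', 't2']: tests with recent failures float to the top
```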

Table 7: Cost Parameters for Prioritizing Test Cases

Ca(T): Cost of analysis.
Cm(T): Cost of maintenance.
Ce(T): Cost of execution.
Cs(T): Cost of selection.
Cc(T): Cost of result checking.
Cf(F(T) \ F(T′)): Cost of missing faults by not selecting T \ T′, where T′ is the selected test suite (set difference: the faults in F(T) but not in F(T′)).
Cf(Fk(T) \ Fk(T′)): Cost of omitting faults, where Fk(x) is the set of regression faults on version vk detected by test suite x.

5.4. Time Aware Test Case Prioritization Techniques

Regression testing has severe time and resource constraints. Not only regression testing but also nightly build and batch techniques are severely time pressed. Extreme programming techniques require frequent and fast execution of test cases. We would like to narrate 5 publications which explored time aware prioritization with an application of genetic algorithms, ILP (Integer Linear Programming) or knapsack solvers.

Time aware TCP techniques were first introduced by Walcott et al. (2006). This is the first study which concretely incorporates a testing time budget. GAPrioritize is the proposed genetic algorithm, which identifies the tuple with maximum fitness. The drawback of this study is that it considers each test case to be independent, with no execution ordering dependency. Alspaugh et al. (2007) showed the usage of 0/1 knapsack solvers to finish the testing activity within a stipulated amount of time. A knapsack with a maximum fixed capacity (the maximum time limit for executing the test suite) should contain distinct items (test cases), each with its own value (percentage of code coverage of each test case) and weight (execution time of each test case). Zhang et al. (2009a) used ILP (Integer Linear Programming) techniques for prioritizing test cases in a time-constrained environment. This is the first attempt to apply ILP for time-aware TCP. The only drawback of the ILP based approach is that, as it requires more analysis time, it appears time consuming for large test suites. Do and Mirarab (2010) cited the example of a software development organization which has a regression test suite of 30,000 test cases; it takes 1000 machine hours to run all the test cases. Apart from execution time, significant time is needed for setting up the test bed, monitoring results etc. The authors raised the important fact that if no time constraint is placed, prioritization becomes non cost effective. An economic model, EVOMO (EVOlution-aware economic MOdel), was used for analyzing several cost factors. The effect of individual test case cost was explored by You et al. (2011). Five time budgets (5%, 25%, 50%, 75% and 100% of the execution time of the entire test suite) were set, and the authors concluded that although it is slightly better to track individual test case cost, the benefit is marginal. Table 9 represents a concise snapshot of the above papers.
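The knapsack view of time-aware prioritization described above (capacity = time budget, weight = execution time, value = coverage) can be sketched with a small 0/1 knapsack dynamic program. The test data and budget below are hypothetical; the surveyed studies use dedicated knapsack or ILP solvers rather than this toy routine.

```python
def time_aware_selection(tests, budget):
    """0/1 knapsack over test cases (a simplified sketch of the time-aware setting):
    weight = execution time, value = statements covered, capacity = time budget.
    Returns the chosen subset via dynamic programming over integer time units."""
    best = {0: (0, [])}                      # time spent -> (value, chosen tests)
    for name, (time, value) in tests.items():
        for spent, (val, chosen) in sorted(best.items(), reverse=True):
            new_spent = spent + time
            if new_spent <= budget and val + value > best.get(new_spent, (-1, []))[0]:
                best[new_spent] = (val + value, chosen + [name])
    return max(best.values())[1]

# Hypothetical (execution time, statements covered) per test; budget of 6 time units.
tests = {"t1": (4, 30), "t2": (3, 25), "t3": (2, 20), "t4": (5, 32)}
print(time_aware_selection(tests, budget=6))  # ['t1', 't3']: 50 statements in budget
```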
Table 8: Summary of Publications on Cost Cognizant Prioritization Techniques

(Malishevsky et al., 2002). Technique: Cost was subdivided into several sub-parameters. Results: Equations for calculating cost for the selection, reduction and prioritization mechanisms were built. Subject programs: bash (size > 50 KLOC).

(Elbaum et al., 2004). Technique: The concept of a cost-benefit threshold was introduced. Results: The preferred technique changes as the cost-benefit threshold increases. Subject programs: 8 C programs (bash, emp-server, sed, xearth, grep, flex, make, gzip).

(Malishevsky et al., 2006). Technique: The authors introduced APFD_C, which considers varying fault severity and test case cost. Results: In 74.6% of cases, the APFD_C based method showed a worse rate of fault detection. Subject programs: Empire (a game played between several players).

(Smith and Kapfhammer, 2009). Technique: The concept of a cumulative coverage function was introduced. Results: Cost has an effect on building smaller and faster fault-detection-worthy test suites. Subject programs: 8 real world case study applications were studied.

(Srikanth et al., 2009). Technique: The cost of configuration and set up time plays an important role. Results: The history of escaped faults helps in prioritizing system configurations. Subject programs: two releases of a large legacy system were studied.

Table 9: Summary of Publications on Time Aware Prioritization Techniques

(Walcott et al., 2006). Technique: The first study which includes a testing time budget. Results: GA based algorithms performed extremely well (up to 120% improvement). Subject programs: two case study applications, JDepend and GradeBook.

(Alspaugh et al., 2007). Technique: Knapsack solvers were used. Results: The testing activity can be finished within a stipulated amount of time. Subject programs: JDepend and GradeBook.

(Zhang et al., 2009a). Technique: ILP was used to prioritize test cases. Results: All the ILP based techniques outperformed GA based techniques. Subject programs: JDepend and JTopas.

(Do and Mirarab, 2010). Technique: EVOMO (EVOlution-aware economic MOdel) was proposed. Results: Test case prioritization becomes non cost effective if there is no time constraint. Subject programs: Java programs - ant, xml-security, jmeter, nanoxml and galileo.

(You et al., 2011). Technique: The effect of individual test case cost was explored. Results: The benefit of individual test case cost tracking is marginal. Subject programs: Siemens and space programs.
5.5. Requirement and Risk Oriented Test Case Prioritization Techniques

Mogyorodi (2001) of Starbase Corporation overviewed the Requirement Based Testing (RBT) process with CaliberRBT, which can design a minimum number of test cases from requirements. The author stated that a problem was presented with 137,438,953,472 test cases and CaliberRBT solved the problem with only 22 test cases. The cost to fix an error is lowest if it is found in the requirements phase. The distribution of bug reports indicates that 56% of all bugs are rooted in the requirements phase, while the design and coding phases yield 27% and 7% respectively (Mogyorodi, 2001). RBT helps in conducting testing in parallel with development; in this case, testing is not a bottleneck. A study conducted by Uusitalo et al. (2008) showed that testing and requirements engineering are strongly linked. In this subsection, we would like to focus on 7 key publications which analyzed requirement based TCP techniques.

The benefits of requirement based test case prioritization have been explored since 2005 through several studies. If all requirements are given equal importance, a value neutral approach gets created. To overcome this neutrality, the authors designed PORT (Prioritization of Requirements for Testing), which is a value driven, system level prioritization scheme (Srikanth and Williams, 2005). PORT considers four factors of requirements - CP (Customer Assigned Priority), IC (Requirement Implementation Complexity), RV (Requirement Volatility) and FP (Fault Proneness) (Fig. 4). CP is a measure of the significance of a requirement from the customer's business value point of view, and RV indicates how many times the requirement has changed. For industrial projects RV is high while for stable projects RV is low. Implementation complexity ranges from 1 to 10, where a larger value indicates higher complexity. FP includes both the number of field failures and the developmental failures while coding a requirement. Based on these 4 values, a PFV (Prioritization Factor Value) is computed for each requirement and is used to derive the Weighted Priority (WP) of each associated test case.

Zhang et al. (2007) presented the metric 'units-of-testing-requirement-priority-satisfied-per-unit-test-case-cost'. As testing requirement priority changes frequently and test case costs also vary, this metric becomes necessary. Another study conducted by Krishnamoorthi and Sahaaya Arul Mary (2009) proposed two more factors, completeness and traceability, as regression test case factors. The theory of generating test cases from requirements with the help of GSE (Genetic Software Engineering) was projected by Salem and Hassan (2011). GSE uses a visual semi-formal notation called BT (Behaviour Tree) to model the requirements. Arafeen and Do (2013) evaluated whether test cases can be clustered based on similarities found in requirements. The requirement clusters are prioritized and more test cases are selected from higher priority clusters. Table 10 represents a concise snapshot of the above papers.

Next, we would like to discuss some risk management based prioritization strategies. Evaluating risk helps in early damage prevention. Stallbaum et al. (2008) proposed an automatic technique, RiteDAP (Risk Based Test Case Derivation And Prioritization), which generates test cases from activity diagrams and then prioritizes them based on the associated risk. Srivastava (2008) mentioned that risk prone software elements should be tested earlier. A new technique was structured by Yoon (2012) that measures the risk exposure values of the different risk items originating from each requirement and then prioritizes test cases based on that. Table 11 represents a concise snapshot of the above papers.

5.6. Model Based Test Case Prioritization Techniques

Gathering execution information is a costly effort in terms of time, money and resources. Also, as source code changes with respect to new requirements, execution information needs timely upgrades, making maintenance difficult. TCP techniques based on models gain interest in such cases. Model based TCP is basically a grey box oriented approach where specification models (state diagrams/activity diagrams etc.) are utilized to represent the expected behaviour of the system. Each test case is linked to an execution path in the model. The term grey-box is used because information about the internal data structure and architecture of the system is required but the source code is not. As execution of a model is fast compared to the actual system, model based TCP is a profitable option. Model based testing (MBT) is also gaining interest as a test automation approach. In this subsection, we would like to highlight 5 noteworthy publications which focused on model based test case prioritization.

Korel et al. (2005) focused on MBT (Model Based Testing) for system models. System models help in understanding the system's behaviour. State based models are executed with a test suite and the execution information is used to prioritize tests. The authors use EFSM (Extended Finite State Machine) as the modelling language, which consists of states and transitions between states. Three system models (an ATM model, a Cruise Control model and a Fuel Pump model) averaging 7 to 13 states and 20 to 28 transitions were experimented with, and the results indicated promising improvement in prioritization. Faults were seeded in the models. A very important conclusion came from the study: monitoring only modified transitions is not that effective; deleted transitions should also be taken care of. Five heuristics were formulated by Korel et al. (2007) for model based testing. A model based TCP technique taking into account several object oriented features like inheritance, polymorphism and aggregation was proposed by Panigrahi and Mall (2010). The authors proposed the EOSDG (Extended Object Oriented System Dependence Graph). Fig. 5 indicates the different steps of this technique.

A similarity based test case selection approach was proposed by Hemmati et al. (2013). In similarity based selection it is hypothesized that more diverse test cases have higher fault revealing ability. Table 12 represents a concise snapshot of the above papers.

5.7. Test Case Prioritization Techniques for GUI/Web Based Applications

In the previous subsections, we have portrayed several strategies for test case prioritization for C and Java programs. However, the increasing usage of the internet is creating a demand for reliable web applications.
Table 10: Summary of Publications on Requirement Aware Prioritization Techniques

(Srikanth et al., 2005), (Srikanth and Williams, 2005), (Srikanth et al., 2013). Technique: The authors designed PORT (Prioritization of Requirements for Testing). Results: 80% of defects got revealed in the first 3 weeks. Subject programs: Java projects and an industrial case study from IBM.

(Zhang et al., 2007). Technique: Changing testing requirement priority and varying test case costs were considered. Results: The new metric 'units-of-testing-requirement-priority-satisfied-per-unit-test-case-cost' was built. Subject programs: a series of simulation experiments were performed.

(Krishnamoorthi and Sahaaya Arul Mary, 2009). Technique: A new system level TCP technique similar to PORT was designed. Results: Two more factors, completeness and traceability, were proposed. Subject programs: 5 J2EE application projects of size approximately 6000 LOC.

(Salem and Hassan, 2011). Technique: Test cases were generated from requirements with the help of GSE. Results: Genetic Software Engineering was used to model the requirements. Subject programs: microwave oven case study.

(Arafeen and Do, 2013). Technique: Test cases were clustered based on similarities found in requirements. Results: More test cases are selected from higher priority clusters. Subject programs: Capstone (online examination) and iTrust (medical record keeper).
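The PORT scheme summarized above combines CP, IC, RV and FP into a PFV per requirement and a Weighted Priority (WP) per test case. The survey does not reproduce the exact weighting, so the sketch below assumes equal factor weights and takes WP as the sum of the PFVs of the requirements a test case exercises; both choices are assumptions made for illustration only.

```python
# Hypothetical PORT-style computation: the survey lists the four factors
# (CP, IC, RV, FP) but not the exact weighting, so equal weights are assumed.
requirements = {
    "R1": {"CP": 9, "IC": 7, "RV": 3, "FP": 2},
    "R2": {"CP": 5, "IC": 4, "RV": 8, "FP": 6},
    "R3": {"CP": 2, "IC": 9, "RV": 1, "FP": 1},
}
weights = {"CP": 0.25, "IC": 0.25, "RV": 0.25, "FP": 0.25}   # assumed; sums to 1

def pfv(req):
    """Prioritization Factor Value: weighted sum of the four factor values."""
    return sum(weights[f] * v for f, v in requirements[req].items())

# Each test case maps to the requirements it exercises; its Weighted Priority (WP)
# is taken here as the sum of the PFVs of those requirements (an assumption).
test_to_reqs = {"t1": ["R1"], "t2": ["R2", "R3"], "t3": ["R1", "R2"]}
wp = {t: sum(pfv(r) for r in reqs) for t, reqs in test_to_reqs.items()}
print(sorted(wp, key=wp.get, reverse=True))  # run high-WP test cases first
```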

Table 11: Summary of Publications on Risk Aware Prioritization Techniques

(Stallbaum et al., 2008). Technique: RiteDAP (Risk Based Test Case Derivation And Prioritization). Results: APDP (Average Percentage of Damage Prevented) was designed. Subject programs: program flow chart of an income tax calculation.

(Srivastava et al., 2008). Technique: Both requirement priority and risk exposure value were considered. Results: Risk prone software elements should be tested earlier. Subject programs: sample case study counting the frequency of a word in a file.

(Yoon, 2012). Technique: RE (Risk Exposure) based prioritization. Results: RE based prioritization yielded 94% APFD. Subject programs: all Siemens programs.
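Risk exposure based ordering, as summarized above, ranks tests by the exposure of the risk items they touch. The sketch below uses the common risk exposure product (probability times damage) with hypothetical risk items and test mappings; it is an illustration, not the RiteDAP or Yoon (2012) implementation.

```python
# Hypothetical risk items: risk exposure = probability of failure * damage (cost).
risk_items = {
    "payment": {"probability": 0.4, "damage": 100},
    "search":  {"probability": 0.7, "damage": 10},
    "login":   {"probability": 0.2, "damage": 60},
}

def exposure(item):
    r = risk_items[item]
    return r["probability"] * r["damage"]

# Each test case is mapped to the risk items it exercises; order by covered exposure.
test_to_items = {"t1": ["search"], "t2": ["payment"], "t3": ["login", "search"]}
score = {t: sum(exposure(i) for i in items) for t, items in test_to_items.items()}
print(sorted(score, key=score.get, reverse=True))  # ['t2', 't3', 't1']
```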

Table 12: Summary of Publications on Model based Prioritization Techniques

(Korel et al., 2005). Technique: Selective prioritization and model dependence based prioritization were proposed. Results: Modified and deleted transitions should be taken care of. Subject programs: three system models (ATM model, Cruise Control model and Fuel Pump model).

(Korel et al., 2007), (Korel et al., 2008). Technique: Several heuristics were formulated for MBT. Results: Transition frequency or number may not have a positive influence on early fault detection. Subject programs: two systems - ISDN and TCP-Dialler.

(Panigrahi and Mall, 2010), (Panigrahi and Mall, 2014). Technique: Object oriented features were taken into account. Results: Almost 30% improvement in bug detection. Subject programs: ATM, Library System, Elevator Controller and Vending Machine.

(Hemmati et al., 2013). Technique: A similarity based test case selection approach was followed. Results: Test case diversity increases the scalability of MBT. Subject programs: a subsystem of a video conference system and a subsystem of a safety critical system.
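A minimal sketch of the model based idea summarized in Table 12: each test case is associated with a transition path in the system model, and tests whose paths touch modified or deleted transitions are promoted. The EFSM-style model and change set below are hypothetical.

```python
# Hypothetical EFSM-style model: each test case executes a sequence of transitions.
test_paths = {
    "t1": ["idle->auth", "auth->menu", "menu->withdraw"],
    "t2": ["idle->auth", "auth->menu", "menu->balance"],
    "t3": ["idle->auth", "auth->idle"],
}
# Transitions touched by the current model change (modified or deleted), since the
# surveyed work stresses that deleted transitions also matter.
changed = {"menu->withdraw", "auth->idle"}

def change_relevance(test):
    """Count how many changed transitions the test's model path exercises."""
    return sum(1 for tr in test_paths[test] if tr in changed)

order = sorted(test_paths, key=change_relevance, reverse=True)
print(order)  # tests exercising modified/deleted transitions run first
```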

Figure 4: Prioritization of Requirements for Testing(PORT)

Figure 5: Model based TCP technique using EOSDG

on web server and consists of several static (same content for all users) and dynamic (content depends on user input) web pages. While testing web applications, sequences of events (clicking a button, opening a menu etc.) performed by users are recorded. Web applications and Graphical User Interfaces are examples of EDS (Event Driven Software). The large number of possible combinations of events creates a huge number of test cases for EDS and poses a challenge. Web application testing has the advantage that user session data can be recorded and used as test data. In this subsection, we narrate 5 research publications which explored prioritization of Web application/GUI application test cases.

Memon and Xie (2005) proposed a framework called DART (Daily Automated Regression Tester) that retests GUI applications frequently. An office suite, TerpOffice, was developed by undergraduate students of the University of Maryland and was used as a subject for testing DART. TerpOffice has TerpWord, TerpSpreadSheet, TerpPaint, TerpCalc and TerpPresent. Other than TerpSpreadSheet, all the subjects showed a large number of fault detections with DART. The authors suggested a future work which will make DART execute certain uncovered parts of the code. User session based testing of web applications was introduced by Sprenkle et al. (2005). Converting usage data into test cases is called user session based testing. The authors proposed an approach called Concept which clusters user sessions that depict similar use cases. Bryce and Memon (2007) proposed TCP by interaction coverage. An example of a pairwise interaction for a library system may be {Member Type = Student, Discount Status = High}. Sampath et al. (2008) proposed several prioritization strategies (frequency of appearance, coverage of parameter values etc.) after evaluating them with three web applications. Table 13 shows the prioritization order (T3-T2-T1-T4) based on length of interaction; T3 covers four interactions while T4 covers only one (a small sketch of this ordering follows the table).

All the previous work considered GUI and web based applications as separate objects of study. Bryce et al. (2011) proposed methods for testing web applications and GUI applications together. Table 14 represents a concise snapshot of the above papers.

Table 13: Test Case Interaction with Web pages

Test Cases Login.jsp Search.jsp Select.jsp Order.jsp Payment.jsp

T1 X X
T2 X X X
T3 X X X X
T4 X
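The ordering reported for Table 13 can be reproduced by scoring each test case by the number of pages it touches and scheduling longer interactions first. In the following minimal Python sketch the interaction counts match Table 13, but the specific pages assigned to T1 and T2 are assumptions, since the flattened table does not preserve them.

# Which pages each user-session-based test case exercises (illustrative; the
# per-test counts match Table 13, the exact pages for T1 and T2 are assumed).
interactions = {
    "T1": {"Login.jsp", "Payment.jsp"},
    "T2": {"Login.jsp", "Search.jsp", "Select.jsp"},
    "T3": {"Login.jsp", "Search.jsp", "Select.jsp", "Order.jsp"},
    "T4": {"Login.jsp"},
}

# Prioritize by length of interaction: tests covering more pages come first.
order = sorted(interactions, key=lambda t: len(interactions[t]), reverse=True)
print("-".join(order))  # T3-T2-T1-T4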

Table 14: Summary of Publications on GUI/Web Application Focused Prioritization Techniques

Authors and Year | Prioritization Technique | Results | Subject Program

(Memon and Xie, 2005) | DART (Daily Automated Regression Tester) framework was built. | Other than TerpSpreadSheet, all subjects showed a large number of faults. | An office suite, TerpOffice
(Sprenkle et al., 2005) | User session based testing of web applications was introduced. | Concept analysis based reduction has a greater fault detection rate. | Applications Bookstore and Course Project Manager
(Bryce and Memon, 2007) | Test case prioritization by interaction coverage was introduced. | 2-way prioritization showed the best APFD. | Four GUI applications from TerpOffice
(Sampath et al., 2008) | Several strategies (frequency of appearance, coverage of parameter values) were followed. | Found most of the faults in the first 10% of tests executed. | Web based applications (Book, CPM, MASPLAS)
(Bryce et al., 2011) | A unified model for testing web applications and GUI applications together was built. | 2-way based prioritization showed promising results for all cases. | 4 GUI projects (TerpOffice) and 3 web applications (Book, CPM, MASPLAS)
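Table 14 above, like most of the surveyed studies, reports results in terms of APFD. For reference, the standard formulation introduced by Rothermel et al. (2001) for a prioritized suite of n test cases detecting m faults is

APFD = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n \, m} + \frac{1}{2n}

where TF_i is the position in the prioritized order of the first test case that reveals fault i. Variants such as APFD_C (cost-cognizant APFD) and NAPFD adjust this basic definition.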

5.8. TCP techniques applied on real world systems

While most of the studies conducted in the TCP domain so far are based on seeded faults, it is necessary to study the effect of TCP on real faults. Seeded faults are easier to inject and may be available in large numbers, while real regression faults might be few in number and tough to locate. Elbaum et al. (2002) studied the real time embedded system QTB. However, additional studies need to be done for checking the applicability of TCP techniques in the real domain. We have come across 4 more remarkable works which studied real-world systems with real faults.

Srivastava and Thiagarajan (2002) proved that TCP works for large systems (1.8 million LOC) by building Echelon. Haymar and Hla (2008) used the PSO (Particle Swarm Optimization) technique, which prioritizes test cases based on their new best position in the test suite. The authors applied the PSO technique to a real time embedded system and showed that 64% coverage can be achieved after running 10 test cases. Nardo et al. (2015) conducted another case study with an industrial system (name withheld and mentioned as NoiseGen) with 37 real regression faults. The study showed that modification information does not help in increasing the fault detection rate. In 2016, a very interesting study conducted by Lu et al. (2016) reflected a new thought process regarding the prioritization approach: very few studies talk about test suite change. The authors concluded that expansion at the test suite level did degrade the efficiency of TCP techniques. Table 15 represents a concise snapshot of the above papers.

5.9. Other Test Case Prioritization Approaches

As we have detailed our representative publications from each avenue of test case prioritization techniques in the previous subsections, we would like to highlight some other distinct prioritization approaches. Search algorithm based prioritization, ant colony optimization based prioritization, clustering oriented prioritization, multi-objective prioritization, and prioritizations without coverage information are gaining interest.

Data flow information based testing (Rummel et al., 2005), user knowledge based ranking of test cases (Tonella et al., 2006) and dynamic runtime behaviour based test case clustering (Yoo et al., 2009) are revealing more defects than control flow based criteria. Test cases are also ranked based on classified events (Belli et al., 2007) and a clustering approach (Chen et al., 2018). Siavash Mirarab and Ladan Tahvildari built a Bayesian Networks (BN) based prioritization approach (Mirarab and Tahvildari, 2007). The prospect of CIT (Combinatorial Interaction Testing) (Qu et al. 2007; Qu et al. 2008) and the construction of a call tree based prioritization approach (Smith et al., 2007) look very promising, as the prioritized test suite takes 82% less time to execute. P. R. Srivastava applied a non homogeneous Poisson process for optimizing the testing time window (Srivastava, 2008).

Z. Li et al. applied meta heuristic and evolutionary algorithms (Li et al., 2007) for TCP. S. Li et al. conducted simulation studies on search algorithms for test case prioritization. Five search algorithms (Total Greedy, Additional Greedy, 2-Optimal Greedy, Hill Climbing and Genetic Algorithms) were studied (Li et al., 2010). Conrad et al. built a new framework called GELATIONS (GEnetic aLgorithm bAsed Test suIte priOritizatioN System) (Conrad et al., 2010). A study to compare the effectiveness of search based and greedy prioritizers was conducted (Williams and Kapfhammer, 2010). ACO (Ant Colony Optimization) techniques can also be used for prioritizing test cases (Singh et al. 2010; Agrawal and Kaur 2018).

Dennis Jeffrey and Neelam Gupta prioritized test cases based on coverage of requirements in the relevant slices of the outputs of each test case. The study showed that if a test case traverses a modification, it will not necessarily expose a fault (Jeffrey and Gupta, 2008). ART (Adaptive Random Testing) (Jiang et al., 2009) and the xml tags of WSDL (Web Service Description Language) documents were also used to order the test suite (Mei et al., 2009).

Prioritizing test cases without coverage information is becoming popular. Mei et al. introduced JUPTA (JUnit Test Case Prioritization Techniques operating in the Absence of coverage information) (Mei et al., 2012). In another study, JUnit test cases were prioritized without using coverage information (Zhang et al., 2009b). Static black box oriented TCP (Thomas et al., 2014) and string distance based TCP (Ledru et al., 2012) do not require code or specification detail, and test suites are prioritized without the execution of source code or specification models. Aggarwal et al. formulated a multiple parameter based prioritization model which actually originates from the SRS (Software Requirement Specification) (Aggarwal et al., 2005).

In 2014, Rothermel et al. formulated a unified strategy that combined the total and additional (not yet covered) techniques together (Hao et al., 2014). Fang et al. designed a similarity based test case prioritization technique where the execution profiles of test cases were utilized; test case diversity yields better performance (Fang et al., 2014). In 2015, Epitropakis et al. clubbed three objectives: average percentage of coverage, average percentage of coverage of changed code and average percentage of past faults covered (Epitropakis et al., 2015). Marchetto et al. proposed a multi objective prioritization technique (Marchetto et al., 2016). Eghbali et al. developed a lexicographical ordering technique for breaking ties: vector x is considered to have a higher lexicographical rank than vector y if the first unequal element of x and y has a greater value in x (Eghbali and Tahvildari, 2016). The method proved to be beneficial when more than one test case achieves the same coverage. A minimal sketch of this tie-breaking rule follows this subsection, and a sketch of the underlying additional-greedy strategy is given after Table 15.

We have described the common prioritization mechanisms (coverage/time/cost/history/model etc.) with their methodology, subject programs, results and shortcomings in the previous subsections. However, Test Case Prioritization is dynamically evolving, and that is why we have summarized these other upcoming prioritization viewpoints.
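The lexicographical tie-breaking rule of Eghbali and Tahvildari (2016) described above can be expressed directly in code. The following minimal Python sketch only illustrates the comparison rule as stated in Section 5.9; it is not the authors' implementation, and the coverage vectors are hypothetical.

def lexicographically_greater(x, y):
    """Return True if coverage vector x outranks y: at the first position
    where the vectors differ, x holds the larger value."""
    for xi, yi in zip(x, y):
        if xi != yi:
            return xi > yi
    return False  # equal vectors: x does not outrank y

# Two candidate test cases with the same total coverage (3 units each);
# the vectors record, for example, per-unit coverage counts.
tc_a = [2, 1, 0]
tc_b = [2, 0, 1]
print(lexicographically_greater(tc_a, tc_b))  # True -> tc_a is scheduled first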
Table 15: Summary of Publications on Prioritization Techniques applied for Real World System

Authors and Year | Prioritization Technique | Results | Subject Program

(Srivastava and Thiagarajan, 2002) | The weight of a test is equal to the number of impacted blocks it covers. | TCP works for large systems (1.8 million LOC). | Two versions of a production program
(Haymar and Hla, 2008) | PSO (Particle Swarm Optimization) technique was proposed. | 64% coverage can be achieved after running 10 test cases. | Real time embedded system
(Nardo et al., 2015) | The behaviour of real regression faults was studied. | Modification information does not help. | NoiseGen of 59-73 KLOC
(Lu et al., 2016) | The study talks about test suite augmentation. | Expansion at the test suite level degrades the efficiency. | 8 real world Java projects
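Many of the coverage-aware techniques summarized in this survey, including the total and additional strategies referenced in Section 5.9 (Hao et al., 2014; Li et al., 2010), reduce to greedy selection over coverage data. The following minimal Python sketch, using hypothetical statement coverage, illustrates the 'additional' variant, which repeatedly picks the test case covering the most not-yet-covered units; it is an illustration, not a reproduction of any surveyed implementation.

# Hypothetical statement coverage per test case.
coverage = {
    "T1": {1, 2, 3},
    "T2": {3, 4},
    "T3": {1, 2, 3, 4, 5},
    "T4": {6},
}

def additional_greedy(coverage):
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # Pick the test adding the most still-uncovered units (ties broken by name).
        best = max(sorted(remaining), key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(additional_greedy(coverage))  # ['T3', 'T4', 'T1', 'T2']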

6. Conclusion and Future Work

As Test Driven Developments (TDD) are generating great profits, testing is not considered an overhead activity anymore. Test case prioritization techniques actually help in the orderly execution of test cases based on some performance or goal function (coverage/time/cost etc.). We have read all the chosen 90 papers in full length before preparing this review. We would like to conclude:

1. Even though APFD (Average Percentage of Faults Detected) is the most used metric for prioritization, Savings Factor, Coverage Effectiveness, APFD_C (APFD per cost), NAPFD (Normalized APFD) etc. are also considered valuable metrics.
2. The SIR (Software Artifact Infrastructure Repository) library hosts several recurrently used subject programs (tcas, schedule, replace etc.) for TCP studies. UNIX utility programs (grep, flex), JAVA programs (ant, jmeter) and several other case study applications are also selected in repeated endeavours.
3. Coverage aware prioritization methods are dominant, while requirement based methods are the second best. Model based TCP and search algorithm based TCP are also finding more attention nowadays.

We would also like to recommend several scopes for future work:

1. Many researchers have addressed the effect of source code modification on test case prioritization; however, the study regarding alteration (augmentation, deletion etc.) at the test suite level needs more exploration. Prioritization techniques for ordering multiple test suites can also be designed.
2. Even though TCP techniques without coverage information have shown promising results, they are still not that popular. Acquiring coverage information is tedious and costly in many cases. In this context, techniques without coverage information (Petke et al. 2015; Parejo et al. 2016) should be researched more.
3. Luo et al. (2016) indicated that a combination of static techniques (which rely on source code analysis) and dynamic techniques (which use run time execution data) might be more beneficial for prioritizing test cases. However, it remains an open question which combination works best. Among the static techniques, the call graph based and topic model based methods are popular; the total-additional strategy, search based approaches and adaptive random testing (ART) are common dynamic techniques. Researchers might investigate which combination of these techniques works best.
4. Researchers have also indicated that TCP based on program change information outperforms static or dynamic techniques (Saha et al., 2015). This scenario can open an entirely new avenue which will be beneficial for large regression test suites.
5. Further exploration is needed for TCP related to large software product lines (SPL). Even though t-wise combinatorial testing helps in reducing the number of test cases, it is mostly limited to small values of t. Henard et al. (2014) proposed a similarity based approach which can be an alternative to the t-wise approach. In this context, realistic approaches for large SPLs should be explored more.
6. Bertolino et al. (2015) opened up a new angle addressing TCP with access control. They indicated that similarity criteria are useful for the prioritization of access control tests. To the best of our knowledge, we do not see any other study exploring this view, so future work might investigate access control test case prioritization in depth.

We hope this article will be beneficial for both beginners and experienced professionals who have a far-reaching interest in the TCP domain. We can confide this is the first review with a thorough report of the last 18 years of TCP techniques.

References

Aggarwal, K.K., Singh, Y., Kaur, A., 2005. A multiple parameter test case prioritization model. Journal of Statistics and Management Systems 8, 369–386.
Agrawal, A.P., Kaur, A., 2018. A Comprehensive Comparison of Ant Colony and Hybrid Particle Swarm Optimization Algorithms Through Test Case Selection, 397–405.
Alspaugh, S., Walcott, K.R., Belanich, M., Kapfhammer, G.M., Soffa, M.L., 2007. Efficient time-aware prioritization with knapsack solvers. Proc. - 1st ACM Int. Workshop on Empirical Assessment of Software Engineering Languages and Technologies, WEASELTech 2007, Held with the 22nd IEEE/ACM Int. Conf. Automated Software Eng., ASE 2007, 13–18.
Arafeen, M.J., Do, H., 2013. Test case prioritization using requirements-based clustering. Proceedings - IEEE 6th International Conference on Software Testing, Verification and Validation, ICST 2013, 312–321.
Belli, F., Eminov, M., Gökçe, N., 2007. A Fuzzy Clustering Approach and Case Study 1 Introduction : Motivation and Related Work. Dependable Computing, 95–110.
Bertolino, A., Daoudagh, S., El Kateb, D., Henard, C., Le Traon, Y., Lonetti, F., Marchetti, E., Mouelhi, T., Papadakis, M., 2015. Similarity testing for access control. Information and Software Technology 58, 355–372.
Bryce, R.C., Memon, A.M., 2007. Test suite prioritization by in- Be Optimal or Not in Test-Case Prioritization. IEEE Transactions
teraction coverage. Workshop on Domain specific approaches to on Software Engineering 42, 490–504.
software test automation in conjunction with the 6th ESEC/FSE Hao, D., Zhang, L., Zhang, L., Rothermel, G., Mei, H., 2014. A
joint meeting - DOSTA ’07 , 1–7. Unified Test Case Prioritization Approach. ACM Trans. Softw.
Bryce, R.C., Sampath, S., Memon, A.M., 2011. Developing a single Eng. Methodol. 24, 10:1—-10:31.
model and test prioritization strategies for event-driven software. Haymar, K., Hla, S., 2008. Applying Particle Swarm Optimization to
IEEE Transactions on Software Engineering 37, 48–64. Prioritizing Test Cases for Embedded Real Time Software Retest-
Catal, C., Mishra, D., 2013. Test case prioritization: A systematic ing YoungSik Choi 3 . Particle Swarm Optimization Technique
mapping study. Software Quality Journal 21, 445–478. in Test Case Prioritization This study applies the particle swarm
Chen, J., Zhu, L., Yueh, T., Towey, D., Kuo, F.c., Huang, R., 2018. optimization , 527–532.
The Journal of Systems and Software Test case prioritization for Hemmati, H., Arcuri, A., Briand, L., 2013. Achieving scalable model-
object-oriented software : An adaptive random sequence approach based testing through test case diversity. ACM Transactions on
based on clustering R 135, 107–125. Software Engineering and Methodology 22, 1–42.
Chittimalli, P.K., Harrold, M.J., 2009. Recomputing coverage infor- Henard, C., Papadakis, M., Harman, M., Jia, Y., Traon, Y.L., 2016.
mation to assist regression testing. IEEE Transactions on Software Comparing White-Box and Black-Box Test Prioritization. 2016
Engineering 35, 452–469. IEEE/ACM 38th International Conference on Software Engineer-
Conrad, A.P., Roos, R.S., Kapfhammer, G.M., 2010. Empirically ing (ICSE) , 523–534.
studying the role of selection operators duringsearch-based test Henard, C., Papadakis, M., Perrouin, G., Klein, J., Heymans, P.,
suite prioritization. Proceedings of the 12th annual conference on Traon, Y.L., 2014. Bypassing the combinatorial explosion: Using
Genetic and evolutionary computation - GECCO ’10 , 1373. similarity to generate and prioritize t-wise test configurations for
Do, H., Mirarab, S., 2010. The Effects of Time Constraints on Test software product lines. IEEE Transactions on Software Engineer-
Case Prioritization : A Series of Controlled Experiments 36, 593– ing 40, 650–670. arXiv:1211.5451v1.
617. Huang, Y.C., Peng, K.L., Huang, C.Y., 2012. A history-based cost-
Do, H., Rothermel, G., 2006. On the use of mutation faults in em- cognizant test case prioritization technique in regression testing.
pirical assessments of test case prioritization techniques. IEEE Journal of Systems and Software 85, 626–637.
Transactions on Software Engineering 32, 733–752. Jeffrey, D., Gupta, N., 2008. Experiments with test case prioritiza-
Do, H., Rothermel, G., Kinneer, A., 2006. Prioritizing JUnit test tion using relevant slices. Journal of Systems and Software 81,
cases: An empirical assessment and cost-benefits analysis. Empir- 196–221.
ical Software Engineering 11, 33–70. Jiang, B., Zhang, Z., Chan, W.K., Tse, T.H., 2009. Adaptive random
Eghbali, S., Tahvildari, L., 2016. Test Case Prioritization Using test case prioritization. ASE2009 - 24th IEEE/ACM International
Lexicographical Ordering. IEEE Transactions on Software Engi- Conference on Automated Software Engineering , 233–244.
neering 42, 1178–1195. Jones, J.A., Harrold, M.J., 2001. Test-suite reduction and prioritiza-
Elbaum, S., Kallakuri, P., Malishevsky, A., Rothermel, G., Kan- tion for modified condition/decision coverage. IEEE International
duri, S., 2003. Understanding the effects of changes on the cost- Conference on Software Maintenance, ICSM 29, 92–103.
effectiveness of regression testing techniques. Software Testing Juristo, N., Moreno, A.M., Vegas, S., 2004. Reviewing 25 Years of
Verification and Reliability 13, 65–83. Testing Technique Experiments. Empirical Software Engineering
Elbaum, S., Malishevsky, A., Rothermel, G., 2002. Test case pri- 9, 7–44. arXiv:1112.2903v1.
oritization: a family of empirical studies. IEEE Transactions on Kapfhammer, G.M., Soffa, M.L., 2007. Using coverage effectiveness
Software Engineering 28, 159–182. to evaluate test suite prioritizations. Proceedings of the 1st ACM
Elbaum, S., Rothermel, G., Kanduri, S., 2004. Selecting a Cost- international workshop on Empirical assessment of software engi-
Effective Test Case Prioritization , 185–210. neering languages and technologies held in conjunction with the
Elbaum, S., Rothermel, G., Penix, J., 2014. Techniques for improv- 22nd IEEE/ACM International Conference on Automated Soft-
ing regression testing in continuous integration development envi- ware Engineering (ASE) 2007 - WEASELTech ’07 , 19–20.
ronments. Proceedings of the 22nd ACM SIGSOFT International Kim, J.M., Porter, A., 2002. A history-based test prioritization tech-
Symposium on Foundations of Software Engineering - FSE 2014 , nique for regression testing in resource constrained environments.
235–245. Proceedings of the 24th international conference on Software en-
Epitropakis, M.G., Yoo, S., Harman, M., Burke, E.K., 2015. Em- gineering - ICSE ’02 , 119.
pirical evaluation of pareto efficient multi-objective regression test Kim, S., Baik, J., 2010. An effective fault aware test case prioritiza-
case prioritisation. Proceedings of the 2015 International Sympo- tion by incorporating a fault localization technique. Proceedings
sium on Software Testing and Analysis - ISSTA 2015 , 234–245. of the 2010 ACM-IEEE International Symposium on Empirical
Erdogmus, H., Morisio, M., Torchiano, M., 2005. On the effectiveness Software Engineering and Measurement - ESEM ’10 , 1.
of the test-first approach to programming. IEEE Transactions on Korel, B., Koutsogiannakis, G., Tahat, L.H., 2007. Model-based test
Software Engineering 31, 226–237. prioritization heuristic methods and their evaluation. Proceedings
Fang, C., Chen, Z., Wu, K., Zhao, Z., 2014. Similarity-based test of the 3rd international workshop on Advances in model-based
case prioritization using ordered sequences of program entities. testing - A-MOST ’07 , 34–43.
Software Quality Journal 22, 335–361. Korel, B., Koutsogiannakis, G., Tahat, L.H., 2008. Application of
Fang, C.R., Chen, Z.Y., Xu, B.W., 2012. Comparing logic cover- system models in regression test suite prioritization. 2008 IEEE
age criteria on test case prioritization. Science China Information International Conference on Software Maintenance , 247–256.
Sciences 55, 2826–2840. Korel, B., Tahat, L.H., Harman, M., 2005. Test prioritization us-
Fazlalizadeh, Y., Khalilian, A., Abdollahi Azgomi, M., Parsa, S., ing system models. IEEE International Conference on Software
2009. Prioritizing test cases for resource constraint environments Maintenance, ICSM 2005, 559–568.
using historical test case performance data. Proceedings - 2009 Krishnamoorthi, R., Sahaaya Arul Mary, S.A., 2009. Factor oriented
2nd IEEE International Conference on Computer Science and In- requirement coverage based system test case prioritization of new
formation Technology, ICCSIT 2009 , 190–195. and regression test cases. Information and Software Technology
Feldt, R., Poulding, S., Clark, D., Yoo, S., 2016. Test Set Diameter: 51, 799–808.
Quantifying the Diversity of Sets of Test Cases. Proceedings - 2016 Ledru, Y., Petrenko, A., Boroday, S., Mandran, N., 2012. Prioritizing
IEEE International Conference on Software Testing, Verification test cases with string distances. Automated Software Engineering
and Validation, ICST 2016 , 223–2331506.03482. 19, 65–95.
Hao, D., Zhang, L., Mei, H., 2016a. Test-case prioritization : achieve- Leon, D., Podgurski, A., 2003. A comparison of coverage-based
ments and challenges 10, 769–777. and distribution-based techniques for filtering and prioritizing test
Hao, D., Zhang, L., Zang, L., Wang, Y., Wu, X., Xie, T., 2016b. To cases. Proceedings - International Symposium on Software Relia-

bility Engineering, ISSRE 2003-Janua, 442–453. regression testing: An Empirical Study of Sampling and Prior-
Li, S., Bian, N., Chen, Z., You, D., He, Y., 2010. A Simulation Study itization. Proceedings of the 2008 international symposium on
on Some Search Algorithms for Regression Test Case Prioritiza- Software testing and analysis - ISSTA ’08 , 75.
tion. 2010 10th International Conference on Quality Software , Qu, X., Cohen, M.B., Woolf, K.M., 2007. Combinatorial interaction
72–81. regression testing: A study of test case generation and prioriti-
Li, Z., Harman, M., Hierons, R.M., 2007. Search algorithms for zation. IEEE International Conference on Software Maintenance,
regression test case prioritization. IEEE Transactions on Software ICSM , 255–264.
Engineering 33, 225–237. Rothermel, G., Harrold, M., 1996. Analyzing regression test selection
Lu, Y., Lou, Y., Cheng, S., Zhang, L., Hao, D., Zhou, Y., Zhang, techniques. IEEE Transactions on Software Engineering 22, 529–
L., 2016. How does regression test prioritization perform in real- 551.
world software evolution? Proceedings of the 38th International Rothermel, G., Harrold, M., Ostrin, J., Hong, C., . An empirical
Conference on Software Engineering - ICSE ’16 , 535–546. study of the effects of minimization on the fault detection capa-
Luo, Q., Moran, K., Poshyvanyk, D., 2016. A Large-Scale Empir- bilities of test suites. Proceedings. International Conference on
ical Comparison of Static and Dynamic Test Case Prioritization Software Maintenance (Cat. No. 98CB36272) , 34–43.
Techniques. Proceedings of the 2016 24th ACM SIGSOFT In- Rothermel, G., Untcn, R.H., Chu, C., Harrold, M.J., 2001. Pri-
ternational Symposium on Foundations of Software Engineering , oritizing test cases for regression testing. IEEE Transactions on
559–570. Software Engineering 27, 929–948.
Malishevsky, A.G., Rothermel, G., Elbaum, S., 2002. Modeling the Rummel, M.J., Kapfhammer, G.M., Thall, A., 2005. Towards the
cost-benefits tradeoffs for regression testing techniques. Software prioritization of regression test suites with data flow information.
Maintenance, 2002. Proceedings. International Conference on , Proceedings of the 2005 ACM symposium on Applied computing
204–213. - SAC ’05 , 1499.
Malishevsky, A.G., Ruthruff, J.R., Rothermel, G., Elbaum, S., 2006. Saff, D., Ernst, M.D., 2003. Reducing wasted development time via
Cost-cognizant Test Case Prioritization. Department of Computer continuous testing. Proceedings - International Symposium on
Science and Engineering University of NebraskaLincoln Techical Software Reliability Engineering, ISSRE 2003-Janua, 281–292.
Report , 1–41. Saha, R.K., Zhang, L., Khurshid, S., Perry, D.E., 2015. An infor-
Marchetto, A., Islam, M.M., Asghar, W., Susi, A., Scanniello, G., mation retrieval approach for regression test prioritization based
2016. A Multi-Objective Technique to Prioritize Test Cases. IEEE on program changes. Proceedings - International Conference on
Transactions on Software Engineering 42, 918–940. Software Engineering 1, 268–279.
Mei, H., Hao, D., Zhang, L., Zhang, L., Zhou, J., Rothermel, G., Salem, Y.I., Hassan, R., 2011. Requirement-based test case gen-
2012. A static approach to prioritizing JUnit test cases. IEEE eration and prioritization. ICENCO’2010 - 2010 International
Transactions on Software Engineering 38, 1258–1275. Computer Engineering Conference: Expanding Information So-
Mei, L., Chan, W.K., Tse, T.H., Merkel, R.G., 2009. Tag-based ciety Frontiers , 152–157.
techniques for black-box test case prioritization for service testing. Sampath, S., Bryce, R.C., Viswanath, G., Kandimalla, V., Koru,
Proceedings - International Conference on Quality Software , 21– A.G., 2008. Prioritizing user-session-based test cases for web ap-
30. plications testing. Proceedings of the 1st International Confer-
Memon, A., Gao, Z., Nguyen, B., Dhanda, S., Nickell, E., Siem- ence on Software Testing, Verification and Validation, ICST 2008
borski, R., Micco, J., 2017. Taming google-scale continuous test- , 141–150.
ing. Proceedings - 2017 IEEE/ACM 39th International Conference Singh, Y., Kaur, A., Suri, B., 2010. Test case prioritization using
on Software Engineering: Software Engineering in Practice Track, ant colony optimization. ACM SIGSOFT Software Engineering
ICSE-SEIP 2017 , 233–242. Notes 35, 1.
Memon, A.M., Xie, Q., 2005. Studying the fault-detection effec- Smith, A., Geiger, J., Kapfhammer, G.M., Soffa, M.L., 2007. Test
tiveness of GUI test cases for rapidly evolving software. IEEE suite reduction and prioritization with call trees. Proceedings of
Transactions on Software Engineering 31, 884–896. the twenty-second IEEE/ACM international conference on Auto-
Mirarab, S., Tahvildari, L., 2007. A Prioritization Approach for mated software engineering - ASE ’07 , 539.
Software. Test , 276–290. Smith, A.M., Kapfhammer, G.M., 2009. An empirical study of in-
Mogyorodi, G., 2001. Requirements-Based Testing : An Overview corporating cost into test suite reduction and prioritization. Pro-
CaliberRBT / Mercury Interactive Integration Overview. Integra- ceedings of the 2009 ACM symposium on Applied Computing -
tion The Vlsi Journal , 286–295. SAC ’09 1, 461.
Nardo, D.D., Alshahwan, N., Briand, L., Labiche, Y., 2015. Sprenkle, S., Gibson, E., Pollock, L., Souter, a., 2005. An empiri-
Coverage-based regression test case selection, minimization and cal comparison of test suite reduction techniques for user-session-
prioritization: a case study on an industrial system. Software based testing of Web applications. 21st IEEE International Con-
Testing, Verification and Reliability 25, 371396. ference on Software Maintenance (ICSM’05) , 587–596.
Panigrahi, C.R., Mall, R., 2010. Model-based regression test case Srikanth, H., Banerjee, S., Williams, L., Osborne, J., 2013. Towards
prioritization. ACM SIGSOFT Software Engineering Notes 35, 1. the prioritization of system test cases. Software Testing, Verifica-
Panigrahi, C.R., Mall, R., 2014. A heuristic-based regression test tion and Reliability 24, 320337.
case prioritization approach for object-oriented programs. Inno- Srikanth, H., Cohen, M.B., Qu, X., 2009. Reducing field failures in
vations in Systems and Software Engineering 10, 155–163. system configurable software: Cost-based prioritization. Proceed-
Parejo, J.A., Sánchez, A.B., Segura, S., Ruiz-Cortés, A., Lopez- ings - International Symposium on Software Reliability Engineer-
Herrejon, R.E., Egyed, A., 2016. Multi-objective test case pri- ing, ISSRE , 61–70.
oritization in highly configurable systems: A case study. Journal Srikanth, H., Williams, L., 2005. On the economics of requirements-
of Systems and Software 122, 287–310. based test case prioritization. ACM SIGSOFT Software Engineer-
Park, H., Ryu, H., Baik, J., 2008. Historical value-based approach for ing Notes 30, 1.
cost-cognizant test case prioritization to improve the effectiveness Srikanth, H., Williams, L., Osborne, J., 2005. System test case pri-
of regression testing. Proceedings - The 2nd IEEE International oritization of new and regression test cases. 2005 International
Conference on Secure System Integration and Reliability Improve- Symposium on Empirical Software Engineering, ISESE 2005 00,
ment, SSIRI 2008 , 39–46. 64–73.
Petke, J., Cohen, M.B., Harman, M., Yoo, S., 2015. Practical Com- Srivastava, A., Thiagarajan, J., 2002. Effectively prioritizing tests in
binatorial Interaction Testing: Empirical Findings on Efficiency development environment. ACM SIGSOFT Software Engineering
and Early Fault Detection. IEEE Transactions on Software Engi- Notes 27, 97.
neering 41, 901–924. Srivastava, P.R., 2008. Model for optimizing software testing period
Qu, X., Cohen, M.B., Rothermel, G., 2008. Configuration-aware using non homogenous Poisson process based on cumulative test

case prioritization. IEEE Region 10 Annual International Confer-
ence, Proceedings/TENCON , 1–6.
Srivastva, P.R., Kumar, K., Raghurama, G., 2008. Test case priori-
tization based on requirements and risk factors. ACM SIGSOFT
Software Engineering Notes 33, 1.
Stallbaum, H., Metzger, A., Pohl, K., 2008. An automated tech-
nique for risk-based test case generation and prioritization. . . . on
Automation of software test , 67.
Thomas, S.W., Hemmati, H., Hassan, A.E., Blostein, D., 2014. Static
test case prioritization using topic models. volume 19.
Tonella, P., Avesani, P., Susi, A., 2006. Using the case-based rank-
ing methodology for test case prioritization. IEEE International
Conference on Software Maintenance, ICSM , 123–132.
Uusitalo, E.J., Komssi, M., Kauppinen, M., Davis, A.M., 2008. Link-
ing requirements and testing in practice. Proceedings of the 16th
IEEE International Requirements Engineering Conference, RE’08
, 265–270.
Walcott, K.R., Soffa, M.L., Kapfhammer, G.M., Roos, R.S., 2006.
TimeAware test suite prioritization. Proceedings of the 2006 inter-
national symposium on Software testing and analysis - ISSTA’06
, 1.
Williams, Z.D., Kapfhammer, G.M., 2010. Using Synthetic Test
Suites to Empirically Compare Search-based and Greedy Priori-
tizers. Proceedings of the 12th Annual Conference on Genetic and
Evolutionary Computation (GECCO ’10) , 2119–2120.
Wong, W.E., Morgan, J.R., London, S., Mathur, A.P., 1998. Effect
of test set minimization on fault detection effectiveness. Software
- Practice and Experience 28, 347–369.
Yoo, S., Harman, M., 2007. Regression Testing Minimisation, Selec-
tion and Prioritisation : A Survey. Test. Verif. Reliab 00, 1–7.
Yoo, S., Harman, M., Tonella, P., Susi, A., 2009. Clustering test
cases to achieve effective and scalable prioritisation incorporating
expert knowledge. Proc. ISSTA , 201–212.
Yoon, M., 2012. A Test Case Prioritization through Correlation
of Requirement and Risk. Journal of Software Engineering and
Applications 05, 823–836.
You, D., Chen, Z., Xu, B., Luo, B., Zhang, C., 2011. An empirical
study on the effectiveness of time-aware test case prioritization
techniques. Proceedings of the 2011 ACM Symposium on Applied
Computing , 1451–1456.
Zhang, L., Hou, S.S., Guo, C., Xie, T., Mei, H., 2009a. Time-aware
test-case prioritization using integer linear programming. Pro-
ceedings of the eighteenth international symposium on Software
testing and analysis - ISSTA ’09 , 401–419.
Zhang, L., Zhou, J., Hao, D., Zhang, L., Mei, H., 2009b. Jtop:
Managing JUnit test cases in absence of coverage information.
ASE2009 - 24th IEEE/ACM International Conference on Auto-
mated Software Engineering , 677–6797.
Zhang, X., Nie, C., Xu, B., Qu, B., 2007. Test case prioritization
based on varying testing requirement priorities and test case costs.
Proceedings - International Conference on Quality Software , 15–
24.

