Accepted Manuscript: Journal of King Saud University - Computer and Information Sciences
PII: S1319-1578(18)30361-6
DOI: https://doi.org/10.1016/j.jksuci.2018.09.005
Reference: JKSUCI 501
Please cite this article as: Mukherjee, R., Sridhar Patnaik, K., A Survey on Different Approaches for Software Test
Case Prioritization, Journal of King Saud University - Computer and Information Sciences (2018), doi: https://
doi.org/10.1016/j.jksuci.2018.09.005
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers
we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and
review of the resulting proof before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
A Survey on Different Approaches for Software Test Case Prioritization
Abstract
Testing is the process of evaluating a system by manual or automated means. While Regression Test Selection (RTS) discards test cases and Test Suite Minimization (TSM) reduces the fault detection rate, Test Case Prioritization (TCP) does not discard any test case. Test Case Prioritization techniques can be based on coverage, historical information, or models; they can also be cost-time aware or requirement-risk aware. GUI/Web applications need special prioritization mechanisms. In this paper, 90 scholarly articles ranging from 2001 to 2018 have been reviewed. We have explored the IEEE, Wiley, ACM Library, Springer, Taylor & Francis and Elsevier databases. We have also described each prioritization method with its findings and subject programs. This paper includes a chronological catalogue of the reviewed papers. We have framed three research questions which sum up the frequently used prioritization metrics, regularly used subject programs and the distribution of different prioritization techniques. To the best of our knowledge, this is the first review with a detailed report of the last 18 years of TCP techniques. We hope this article will be beneficial for both beginners and seasoned professionals.
Keywords: regression, prioritization, techniques, program, fault, coverage
insights about the frequently used metrics for prioritization, commonly used subject programs for study, prominent researchers in this field, etc.
4. Research Methodology
RQ1: Which metrics are used often for TCP? This RQ will help in rebuilding the existing metrics while proposing new ones.
RQ2: What are the frequently used subject programs in prioritization studies? New researchers in this field will find it easier to replicate the existing studies with these subject programs. If a new method is proposed, it can also be verified using these subjects.
RQ3: Which prioritization methods are commonly explored and what are their proportions? This RQ will give an idea about which techniques can be explored more.
APFD (Average Percentage of Faults Detected): the value ranges between 0 and 100, where a higher value indicates better fault detection.
APFD_C (APFD per cost): units of fault severity detected per unit of test cost are calculated.
NAPFD (Normalized APFD): this metric considers both fault detection and time of detection.
ASFD (Average Severity of Faults Detected): ASFD for requirement i is the summation of severity values of faults detected for that requirement divided by TSFD (Total Severity of Faults Detected).
RP (Most Likely Relative Position): an average relative position of the first failed test case that finds a defect is used. This metric is primarily used for model based TCP.
APDP (Average Percentage of Damage Prevented): APDP is used to measure the effectiveness of Risk Based Test Case Derivation and Prioritization.
Table 3: Subject Programs Used by Several Prioritization Studies
tcas (138 LOC), schedule2 (297 LOC), schedule (299 LOC), tot_info (346 LOC), print_tokens (402 LOC), print_tokens2 (483 LOC), replace (516 LOC), space (6218 LOC): these are 8 C programs from the publicly available SIR (Software-artifact Infrastructure Repository). The first 7 are called the Siemens programs and the eighth is a program from the European Space Agency.
GradeBook, JDepend: case study applications. GradeBook performs grading and JDepend generates metrics.
detection rate (Rothermel et al., 2001). Nine prioritization techniques were applied on eight C programs from SIR (Software-artifact Infrastructure Repository). The most insightful outcome of this study revealed that for the first seven programs (called the Siemens programs) the total techniques outperformed the additional techniques, but for the eighth program, space, which is a real and huge program from the European Space Agency, the additional techniques performed better. The additional techniques are called feedback or greedy strategies as they iteratively first select a test case of highest coverage, then adjust the coverage details of the remaining test cases for elements (statements/branches etc.) not yet covered. The shortcoming of this study was that it considered all faults to be of the same severity and did not incorporate a cost factor. In continuation with Rothermel's study, Elbaum et al. conducted another empirical assessment which classifies the prioritization techniques as coarse granularity (function level techniques) and fine granularity (statement level techniques) (Elbaum et al., 2002). The concept of SF (Savings Factor) was introduced in this work. SF is a metric that translates the APFD measure into a benefit scale; SF 1000 indicates that 1% APFD gain creates a savings of 1000 dollars. The study concludes that the fine granularity techniques outperform the coarse granularity techniques by a very little edge. This marginal gain is not beneficial because for larger systems statement level techniques are too tedious and expensive, while function level techniques are less expensive. Jones and Harrold (2001) for the first time applied Test Case Prioritization techniques on MC/DC test suites. The authors proposed a build-up technique (test cases are added to an empty test suite) and a break-down technique (removing low-contribution test cases) by identifying essential and redundant test cases.

All of the previous studies mainly used C programs for implementing the prioritization prototypes. We would like to bring up the work conducted by Do et al. (2006) which for the first time focused on prioritizing JUnit test cases. For the JUnit framework, test cases were categorized into two segments - test cases at test class stage and test cases at test method stage. Apart from analyzing JUnit test cases for the first time, this study also added a new viewpoint in terms of time and money: if the prioritization method involves a feedback strategy then the cost of prioritization increases. However, this study neither considered other test suites nor incorporated object oriented features (inheritance, encapsulation, polymorphism etc.) while testing Java programs. We would like to mention another experiment executed by Do and Rothermel (2006) which studied the same open source Java programs of the previous work with mutation faults. This study eliminated the shortcoming of the previous work by considering a TSL (Test Specification Language) test suite. In another research conducted in 2007, Kapfhammer and Soffa introduced a metric CE (Coverage Effectiveness) for ordering test suites. If test requirements are fulfilled quickly then a high CE is achieved (Kapfhammer and Soffa, 2007). Fang et al. (2012) executed another experiment which focused on logic coverage testing. Branch coverage and modified condition/decision coverage are logic coverage testing methods. Logic coverage is widely accepted for safety-critical software. The authors coined the term CI (Convergence Index), which is used to decide when to stop testing. A quick convergence indicates low testing cost and a rapid fault detection rate.

In almost all the previous studies, the authors mentioned a significant performance margin between prioritized orderings and optimal orderings. However, we would like to highlight a recent work by Hao et al. (2016a) which implemented the optimal coverage based prioritization technique by ILP (Integer Linear Programming) and ruled out that fact. The study concluded that the optimal technique is notably worse than the additional technique in terms of fault detection rate or execution time. Table 4 represents a concise snapshot of the above papers.

Though there is a logical establishment that greater coverage indicates greater effectiveness, coverage is just one parameter for prioritizing test cases. Maximum coverage does not always ensure that all faults will be covered. As the main goal of testing is to ensure the delivery of a quality product within a stipulated time and budget, TCP techniques should be time, cost and requirement centric.
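The additional (feedback/greedy) strategy can be sketched as follows. This is a minimal illustration assuming per-test coverage sets are available; the function name, input format, and the reset of the uncovered set once full coverage is reached are one common variant, not necessarily the exact procedure of any surveyed study:

```python
def additional_prioritize(coverage):
    """Greedy 'additional' coverage-based prioritization.

    coverage: dict mapping test-case id -> set of covered elements
              (statements, branches, ...).
    Each step picks the test covering the most still-uncovered elements,
    then removes those elements from the uncovered set (the feedback step).
    """
    remaining = dict(coverage)
    uncovered = set().union(*coverage.values())
    order = []
    while remaining:
        # prefer tests with the most not-yet-covered elements; break ties
        # by total coverage
        best = max(remaining,
                   key=lambda t: (len(remaining[t] & uncovered), len(remaining[t])))
        order.append(best)
        uncovered -= remaining.pop(best)
        if not uncovered:
            # everything covered: reset and keep ordering the leftover tests
            uncovered = set().union(*coverage.values())
    return order
```

With coverage {"t1": {1, 2}, "t2": {1, 2, 3}, "t3": {4}}, the total strategy would start with t2 as well, but the feedback step then prefers t3 (which covers the still-uncovered element 4) over the larger t1.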
Table 4: Summary of Publications on Coverage Aware Prioritization Techniques
(Elbaum et al., 2002): Concepts of coarse granularity (function level) and fine granularity (statement level) were introduced. Fine granularity techniques outperform the coarse granularity techniques by a very little edge. Subjects: real time embedded system QTB, UNIX utilities flex and grep.
(Do et al., 2006): Coverage based techniques were applied to JUnit test cases for the first time. In some cases (ant and xml-security) non-control techniques performed better. Subjects: four Java programs - ant, jmeter, xml-security and jtopas.
(Do and Rothermel, 2006): Use of mutation faults was studied. For ant the non-control techniques performed better, while for jtopas they showed no improvement. Subjects: two Java programs (galileo and nanoxml) in TSL test suite and four Java programs (ant, xml-security, jmeter, jtopas) in JUnit.
(Fang et al., 2012): The concepts of logic coverage and convergence index were introduced. Quick convergence is necessary for low cost and rapid fault detection. Subjects: 3 Java programs - Tcas, NanoXML, Ant.
(Hao et al., 2016b): ILP was used to implement optimal coverage based prioritization. The optimal method is worse than the additional technique. Subjects: Siemens and Space programs, 2 Java programs (jtopas and siena).
Table 5: Test Case Selection Parameters
TC: test case.
T: time instant.
Htc: time ordered observations {h1, h2, ..., ht} obtained from previous runs.
α: weight factor for an individual observation; a lower value indicates an older observation and a higher value indicates a recent observation.
Ptc,t(Htc, α): selection probability of each test case.

severity data. Fazlalizadeh et al. (2009) mentioned that finding an optimal execution order of test cases does not have any deterministic solution. Kim and Baik (2010) proposed FATCP (Fault Aware Test Case Prioritization), which utilized historical fault information by using a fault localization technique. Fault localization is actually a debugging activity which points out the location of a fault or a set of faults in a program. FATCP outperformed branch coverage and statement coverage prioritization methods for most of the Siemens programs. Huang et al. (2012) proposed MCCTCP (Modified Cost Cognizant TCP) based on test case execution history. MCCTCP does not need analysis of source code and it feeds historical information of each test case to a GA (genetic algorithm). Table 6 represents a concise snapshot of the above papers.
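The selection probability Ptc,t(Htc, α) of Table 5 is commonly formulated as an exponentially smoothed average of the time-ordered observations, P_t = α·h_t + (1 - α)·P_{t-1}. A minimal sketch under the assumption of binary fail/pass observations (the function name and input encoding are illustrative):

```python
def selection_probability(history, alpha):
    """Exponentially smoothed selection probability of a test case.

    history: time-ordered observations h1..ht from previous runs,
             e.g. 1 if the test failed in that run, 0 otherwise.
    alpha:   weight in (0, 1); a higher alpha weights recent runs more,
             matching Table 5's description of the weight factor.
    """
    p = history[0]
    for h in history[1:]:
        p = alpha * h + (1 - alpha) * p  # P_t = alpha*h_t + (1-alpha)*P_{t-1}
    return p
```

A test that failed only in the most recent of three runs with alpha = 0.5 gets probability 0.5, while older failures decay geometrically.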
Table 6: Summary of Publications on History based Prioritization Techniques
(Kim and Porter, 2002): A TCP method using historical information about test cases was proposed. The authors assign a selection probability to each test case. Subjects: 8 C programs (Siemens and Space).
(Park et al., 2008): A historical value-based technique with a cost centric approach was proposed. Test case execution cost and fault severity are important factors. Subjects: 8 versions of the open source Java program ant.
(Fazlalizadeh et al., 2009): The priority of a test case in the previous regression test session was considered. Optimal execution of test cases does not have a deterministic solution. Subjects: 8 C programs (Siemens and Space).
(Kim and Baik, 2010): FATCP (Fault Aware Test Case Prioritization) was used. FATCP outperformed branch coverage and statement coverage methods. Subjects: 8 C programs (Siemens and Space).
(Huang et al., 2012): MCCTCP (Modified Cost Cognizant TCP) was utilized for prioritization. Improvement in fault detection was observed. Subjects: two UNIX utilities, sed and flex.
Table 7: Cost Parameters for Prioritizing Test Cases
Ca(T): cost of analysis.
Cm(T): cost of maintenance.
Ce(T): cost of execution.
Cs(T): cost of selection.
Cc(T): cost of result checking.
Cf(F(T) \ F(T')): cost of missing faults by not selecting T \ T', where T' is the selected test suite (set difference - the set of elements present in F(T) but not in F(T')).
Cf(Fk(T) \ Fk(T')): cost of omitting faults, where Fk(x) is the set of regression faults on version vk detected by test suite x.

batch techniques are also severely time pressed. Extreme programming techniques require frequent and fast execution of test cases. We would like to narrate 5 publications which explored time aware prioritization with an application of Genetic Algorithms, ILP (Integer Linear Programming) or knapsack solvers.

Time aware TCP techniques were first introduced by Walcott et al. (2006). This is the first study which concretely incorporates a testing time budget. GAPrioritize is the proposed Genetic Algorithm which identifies the tuple with maximum fitness. The drawback of this study is that it considers each test case to be independent, with no execution ordering dependency. Alspaugh et al. (2007) showed the usage of 0/1 knapsack solvers to finish the testing activity within a stipulated amount of time. A knapsack with a maximum fixed capacity (the maximum time limit for executing the test suite) should contain distinct items (test cases), each with its own value (percentage of code coverage of each test case) and weight (execution time of each test case). Zhang et al. (2009a) used ILP (Integer Linear Programming) techniques for prioritizing test cases in a time-constrained environment. This is the first attempt to apply ILP for time-aware TCP. The only drawback of the ILP based approach is that, as it requires more analysis time, it appears time consuming for large test suites. Do and Mirarab (2010) cited an example of a software development organization which has a regression test suite of 30,000 test cases. It takes 1000 machine hours to run all the test cases. Apart from execution time, significant time is needed for setting up the test bed, monitoring results etc. The authors raised an important fact that if no time constraint is placed, prioritization becomes non cost effective. An economic model, EVOMO (EVOlution-aware economic MOdel), was used for analyzing several cost factors. The effect of individual test case cost was explored by You et al. (2011). Five time budgets (5%, 25%, 50%, 75% and 100% of the execution time of the entire test suite) were set, and the authors concluded that although it is slightly better to track individual test case cost, the benefit is marginal. Table 9 represents a concise snapshot of the above papers.

5.5. Requirement and Risk oriented Test Case Prioritization Techniques
Mogyorodi (2001) of Starbase Corporation overviewed the Requirement Based Testing (RBT) process with CaliberRBT, which can design a minimum number of test cases from requirements. The author has stated that a problem was presented with 137,438,953,472 test cases and CaliberRBT solved the problem with only 22 test cases. The cost to fix an error is lowest if it is found in the requirements phase. The distribution of bug reports indicates that 56% of all
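The 0/1 knapsack formulation just described (capacity = time budget, weight = execution time, value = coverage) can be sketched with standard dynamic programming. This is an illustrative solver, not Alspaugh et al.'s implementation; the function name and the assumption of integer time units are ours:

```python
def knapsack_select(tests, budget):
    """0/1 knapsack selection of test cases under a time budget.

    tests:  list of (name, exec_time, coverage_value) tuples;
            exec_time in whole units, coverage_value e.g. % of code covered.
    budget: total time available, in the same integer units as exec_time.
    Returns (best_total_value, selected_names).
    """
    # dp[t] = (best total value achievable within time t, chosen test names)
    dp = [(0, [])] * (budget + 1)
    for name, time, value in tests:
        # iterate time backwards so each test is used at most once (0/1)
        for t in range(budget, time - 1, -1):
            cand = (dp[t - time][0] + value, dp[t - time][1] + [name])
            if cand[0] > dp[t][0]:
                dp[t] = cand
    return dp[budget]
```

For instance, with tests of (time, value) = (2, 30), (3, 50), (4, 60) and a budget of 5, the solver packs the first two tests for a total value of 80 rather than the single highest-value test.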
Table 8: Summary of Publications on Cost Cognizant Prioritization Techniques
(Malishevsky et al., 2002): Cost was subdivided into several sub-parameters. Equations for calculating cost for selection, reduction, and prioritization mechanisms were built. Subject: bash (size > 50 KLOC).
(Elbaum et al., 2004): The concept of a cost-benefit threshold was introduced. The preferred technique changes as the cost-benefit threshold increases. Subjects: 8 C programs (bash, emp-server, sed, xearth, grep, flex, make, gzip).
(Malishevsky et al., 2006): The authors introduced APFD_C, which considers varying fault severity and test case cost. In 74.6% of cases, the APFD_C based method showed a worse rate of fault detection. Subject: Empire (a game played between several players).
(Smith and Kapfhammer, 2009): The concept of a cumulative coverage function was introduced. Cost has an effect on building smaller and faster fault-detection-worthy test suites. Subjects: 8 real world case study applications.
(Srikanth et al., 2009): Cost of configuration and setup time plays an important role. History of escaped faults helps in prioritizing system configurations. Subjects: two releases of a large legacy system.

Table 9: Summary of Publications on Time Aware Prioritization Techniques
(Walcott et al., 2006): The first study which includes a testing time budget. GA based algorithms performed extremely well (up to 120% improvement). Subjects: two case study applications, JDepend and GradeBook.
(Alspaugh et al., 2007): Knapsack solvers were used. The testing activity can be finished within a stipulated amount of time. Subjects: JDepend and GradeBook.
(Zhang et al., 2009a): ILP was used to prioritize test cases. All the ILP based techniques outperformed GA based techniques. Subjects: JDepend and JTopas.
(Do and Mirarab, 2010): EVOMO (EVOlution-aware economic MOdel) was proposed. Test case prioritization becomes non cost effective if there is no time constraint. Subjects: Java programs - ant, xml-security, jmeter, nanoxml, and galileo.
(You et al., 2011): The effect of individual test case cost was explored. The benefit of individual test case cost tracking is marginal. Subjects: Siemens and Space programs.
bugs are rooted in the requirements phase, while the design and coding phases yield 27% and 7% respectively (Mogyorodi, 2001). RBT helps in conducting testing in parallel with development. In this case, testing is not a bottleneck. A study conducted by Uusitalo et al. (2008) showed that testing and requirements engineering are strongly linked. In this subsection, we would like to focus on 7 key publications which analyzed requirement based TCP techniques.

The benefits of requirement based test case prioritization have been explored since 2005 through several studies. If all requirements are given equal importance then a value neutral approach gets created. To overcome this neutrality, the authors designed PORT (Prioritization of Requirements for Testing), which is a value driven, system level prioritization scheme (Srikanth and Williams, 2005). PORT considers four factors - CP (Customer Assigned Priority), IC (Requirement Implementation Complexity), RV (Requirement Volatility) and FP (Fault Proneness) of requirements (Fig. 4). CP is a measure of the significance of a requirement from the customer's business value point of view, and RV indicates how many times the requirement has changed. For industrial projects RV is high, while for stable projects RV is low. Implementation Complexity ranges from 1 to 10, where a larger value indicates higher complexity. FP includes both the number of field failures and developmental failures while coding a requirement. Based on these 4 values, PFV (Prioritization Factor Value) is computed for each requirement and it is used to derive the Weighted Priority (WP) of each associated test case.

Zhang et al. (2007) presented a metric 'units-of-testing-requirement-priority-satisfied-per-unit-test-case-cost'. As testing requirement priority changes frequently and test case costs also vary, this metric becomes necessary. Another study, conducted by Krishnamoorthi and Sahaaya Arul Mary (2009), proposed two more factors, Completeness and Traceability, as regression test case factors. The theory of generation of test cases from requirements with the help of GSE (Genetic Software Engineering) was projected by Salem and Hassan (2011). GSE uses a visual semi-formal notation called BT (Behaviour Tree) to model the requirements. Arafeen and Do (2013) evaluated whether test cases can be clustered based on similarities found in requirements. The requirement clusters are prioritized and more test cases are selected from higher priority clusters. Table 10 represents a concise snapshot of the above papers.

Next, we would like to discuss some risk management based prioritization strategies. Evaluating risk helps in early damage prevention. Stallbaum et al. (2008) proposed an automatic technique, RiteDAP (Risk Based Test Case Derivation And Prioritization), which generates test cases from Activity Diagrams and then prioritizes them based on associated risk. Srivastava (2008) mentioned that risk prone software elements should be tested earlier. A new technique was structured by Yoon (2012) that measures the risk exposure values of different risk items which originated from each requirement and then prioritizes test cases based on that. Table 11 represents a concise snapshot of the above papers.

5.6. Model based Test Case Prioritization Techniques
Gathering execution information is a costly effort in terms of time, money and resources. Also, as source code changes with respect to new requirements, execution information needs timely upgrades, making maintenance difficult. TCP techniques based on models gain interest in such cases. Model based TCP is basically a grey box oriented approach where specification models (state diagrams/activity diagrams etc.) are utilized to represent the expected behaviour of the system. Each test case is linked to an execution path in the model. The term grey-box is used as information about the internal data structure and architecture of the system is required but the source code is not. As execution of a model is fast compared to the actual system, model based TCP is a profitable option. Model based testing (MBT) is also gaining interest as a test automation approach. In this subsection, we would like to highlight 5 noteworthy publications which focused on model based test case prioritization.

Korel et al. (2005) focused on MBT (Model based testing) for system models. System models help in understanding a system's behaviour. State based models are executed with a test suite and the execution information is used to prioritize tests. The authors use EFSM (Extended Finite State Machine) as the modelling language, which consists of states and transitions between states. Three system models (ATM model, Cruise Control model and Fuel Pump model) averaging 7 to 13 states and 20 to 28 transitions were experimented with, and results indicated promising improvement in prioritization. Faults were seeded in the models. A very important conclusion came from the study: monitoring only modified transitions is not that effective; deleted transitions should also be taken care of. Five heuristics were formulated by Korel et al. (2007) for model based testing. A model based TCP technique taking into account several object oriented features like inheritance, polymorphism, aggregation etc. was proposed by Panigrahi and Mall (2010). The authors proposed EOSDG (Extended Object Oriented System Dependence Graph). Fig. 5 indicates the different steps of this technique.

A similarity based test case selection approach was proposed by Hemmati et al. (2013). In similarity based selection it is hypothesized that more diverse test cases have higher fault revealing ability. Table 12 represents a concise snapshot of the above papers.

5.7. Test Case Prioritization Techniques for GUI/Web Based Applications
In the previous subsections, we have portrayed several strategies for Test Case Prioritization for C and JAVA programs. However, increasing usage of the internet is creating demand for reliable web applications. Web applications run
Table 10: Summary of Publications on Requirement Aware Prioritization Techniques
(Srikanth et al., 2005), (Srikanth and Williams, 2005), (Srikanth et al., 2013): The authors designed PORT (Prioritization of Requirements for Testing). 80% of defects got revealed in the first 3 weeks. Subjects: JAVA projects and an industrial case study from IBM.
(Zhang et al., 2007): Changing testing requirement priority and varying test case costs were considered. A new metric, 'units-of-testing-requirement-priority-satisfied-per-unit-test-case-cost', was built. Subjects: a series of simulation experiments were performed.
(Krishnamoorthi and Sahaaya Arul Mary, 2009): A new system level TCP technique similar to PORT was designed. Two more factors, Completeness and Traceability, were proposed. Subjects: 5 J2EE application projects of size approximately 6000 LOC.
(Salem and Hassan, 2011): Test cases were generated from requirements with the help of GSE. Genetic Software Engineering was used to model the requirements. Subject: microwave oven case study.
(Arafeen and Do, 2013): Test cases were clustered based on similarities found in requirements. More test cases are selected from higher priority clusters. Subjects: Capstone (online examination) and iTrust (medical record keeper).

Table 11: Summary of Publications on Risk Based Prioritization Techniques
(Stallbaum et al., 2008): RiteDAP (Risk Based Test Case Derivation And Prioritization). APDP (Average Percentage of Damage Prevented) was designed. Subject: program flow chart of an income tax calculation.
(Srivastava et al., 2008): Both requirement priority and risk exposure value were considered. Risk prone software elements should be tested earlier. Subject: sample case study to count the frequency of a word in a file.
(Yoon, 2012): RE (Risk Exposure) based prioritization. RE based prioritization yielded 94% APFD. Subjects: all Siemens programs.

Table 12: Summary of Publications on Model Based Prioritization Techniques
(Korel et al., 2005): Selective prioritization and model dependence based prioritization were proposed. Modified and deleted transitions should be taken care of. Subjects: three system models (ATM model, Cruise Control model and Fuel Pump model).
(Korel et al., 2007), (Korel et al., 2008): Several heuristics were formulated for MBT. Transition frequency or number may not have a positive influence on early fault detection. Subjects: two systems - ISDN and TCP-Dialler.
(Panigrahi and Mall, 2010), (Panigrahi and Mall, 2014): Object oriented features were taken into account. Almost 30% improvement in bug detection. Subjects: ATM, Library System, Elevator Controller and Vending Machine.
(Hemmati et al., 2013): A similarity based test case selection approach was followed. Test case diversity increases scalability of MBT. Subjects: a subsystem of a video conference system and a subsystem of a safety critical system.
Figure 4: Prioritization of Requirements for Testing (PORT)
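The PORT computation outlined around Fig. 4 can be illustrated as follows. This is a simplified sketch, not Srikanth and Williams' exact formulation: the equal weighting of CP, IC, RV and FP when forming PFV, and the normalization of WP by the total PFV, are assumptions made for illustration (the real scheme lets stakeholders weight the factors):

```python
def port_prioritize(requirements, test_req_map):
    """Simplified PORT-style weighting (illustrative sketch).

    requirements: dict mapping requirement id -> dict of the four PORT
                  factor scores: CP, IC, RV, FP.
    test_req_map: dict mapping test id -> list of requirement ids it covers.
    Returns tests sorted by descending Weighted Priority (WP).
    """
    # PFV of a requirement: here an equally weighted sum of its four factors.
    pfv = {r: sum(factors.values()) for r, factors in requirements.items()}
    total = sum(pfv.values())
    # WP of a test case: share of total PFV carried by the requirements it covers.
    wp = {t: sum(pfv[r] for r in reqs) / total
          for t, reqs in test_req_map.items()}
    return sorted(wp, key=wp.get, reverse=True)
```

A test covering a high-CP, frequently changed, fault-prone requirement thus rises to the front of the suite.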
on a web server and consist of several static (same content for all users) and dynamic (content depends on user input) web pages. While testing web applications, the sequences of events (clicking a button, opening a menu etc.) which are performed by users are recorded. Web applications and Graphical User Interfaces are examples of EDS (Event Driven Software). The large number of possible combinations of events creates a huge number of test cases for EDS and poses a challenge. Web application testing has the advantage that user session data can be recorded and used as test data. In this subsection, we narrate 5 research publications which explored prioritization of Web application/GUI application test cases.

Memon and Xie (2005) proposed a framework called DART (Daily Automated Regression Tester) that retests GUI applications frequently. An office suite, TerpOffice, was developed by undergraduate students of the University of Maryland and was used as a subject for testing DART. TerpOffice has TerpWord, TerpSpreadSheet, TerpPaint, TerpCalc and TerpPresent. Other than TerpSpreadSheet, all the subjects showed a large number of faults detected with DART. The authors suggested future work which will make DART execute certain uncovered parts of the code. User session based testing of web applications was introduced by Sprenkle et al. (2005). Converting usage data into test cases is called user session based testing. The authors proposed an approach called Concept which clusters user sessions that depict similar use cases. Bryce and Memon (2007) proposed TCP by interaction coverage. An example of a pair wise interaction for a library system may be {Member Type = Student, Discount Status = High}. Sampath et al. (2008) proposed several prioritization strategies (frequency of appearance, coverage of parameter values etc.) after evaluating them with three web applications. Table 13 shows the prioritization order (T3-T2-T1-T4) based on length of interaction; T3 covers four interactions while T4 covers only one.

All the previous work considered GUI and web based applications as separate objects of study. Bryce et al. (2011) proposed methods for testing web applications and GUI applications together. Table 14 represents a concise snapshot of the above papers.

5.8. TCP Techniques Applied on Real World Systems
While most of the conducted studies in the TCP domain are based on seeded faults so far, it is necessary to study the effect of TCP on real faults. Seeded faults are easier to inject and may be available in large numbers, while real regression faults might be a handful and tough to locate. Elbaum et al. (2002) studied the real time embedded system QTB. However, additional studies need to be done for checking the applicability of TCP techniques in the real domain. We have come across 4 more remarkable works which studied real time systems with real faults.

Srivastava and Thiagarajan (2002) proved that TCP works for large systems (1.8 million LOC) by building Echelon. Haymar and Hla (2008) used a PSO (Particle Swarm Optimization) technique which prioritizes test cases based on their new best position in the test suite. The authors applied the PSO technique to a real time embedded system and
Table 13: Test Case Interaction with Web Pages
T1: X X
T2: X X X
T3: X X X X
T4: X
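The length-of-interaction ordering behind Table 13 amounts to sorting tests by how many interactions each covers. A minimal sketch (the function name and input format are illustrative, not from Sampath et al.'s tooling):

```python
def prioritize_by_interaction_length(test_interactions):
    """Order test cases by number of interactions covered, longest first.

    test_interactions: dict mapping test id -> set of interactions exercised
    (e.g. parameter=value pairs). Produces orderings like T3-T2-T1-T4.
    """
    return sorted(test_interactions,
                  key=lambda t: len(test_interactions[t]),
                  reverse=True)
```

Applied to Table 13's counts (T1 covers two interactions, T2 three, T3 four, T4 one), this reproduces the order T3-T2-T1-T4.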
Table 14: Summary of Publications on GUI/Web Application Prioritization Techniques
(Memon and Xie, 2005): DART (Daily Automated Regression Tester) framework was built. Other than TerpSpreadSheet, all subjects showed a large number of faults. Subject: an office suite, TerpOffice.
(Sprenkle et al., 2005): User session based testing of web applications was introduced. Concept-analysis based reduction has a greater fault detection rate. Subjects: applications Bookstore and Course Project Manager.
(Bryce and Memon, 2007): Test case prioritization by interaction coverage was introduced. 2-way prioritization showed the best APFD. Subjects: four GUI applications from TerpOffice.
(Sampath et al., 2008): Several strategies (frequency of appearance, coverage of parameter) were followed. Found most of the faults in the first 10% of tests executed. Subjects: web based applications (Book, CPM, MASPLAS).
(Bryce et al., 2011): A unified model for testing web applications and GUI applications together was built. 2-way based prioritization showed promising results for all cases. Subjects: 4 GUI projects (TerpOffice) and 3 web applications (Book, CPM, MASPLAS).
showed that 64% coverage can be achieved after running 10 test cases. Nardo et al. (2015) conducted another case study with an industrial system (name withheld and mentioned as NoiseGen) with 37 real regression faults. The study showed that modification information does not help in increasing the fault detection rate. In 2016, a very interesting study conducted by Lu et al. (2016) reflected a new thought process regarding the prioritization approach: very few studies talk about change at the test suite level. The authors concluded that expansion at the test suite level did degrade the efficiency of TCP techniques. Table 15 represents a concise snapshot of the above papers.

Table 15: Summary of Publications on Prioritization Techniques applied for Real World Systems

(Srivastava and Thiagarajan, 2002) | The weight of a test is equal to the number of impacted blocks it covers. | TCP works for large systems (1.8 million LOC). | Two versions of a production program
(Haymar and Hla, 2008) | A PSO (Particle Swarm Optimization) technique was proposed. | 64% coverage can be achieved after running 10 test cases. | Real time embedded system
(Nardo et al., 2015) | The behaviour of real regression faults was studied. | Modification information does not help. | NoiseGen of 59-73 KLOC
(Lu et al., 2016) | The study talks about test suite augmentation. | Expansion at the test suite level degrades the efficiency. | 8 real world Java projects

5.9. Other Test Case Prioritization Approaches

As we have detailed representative publications from each avenue of test case prioritization techniques in the previous subsections, we would like to highlight some other distinct prioritization approaches. Search algorithm based prioritization, ant colony optimization based prioritization, clustering oriented prioritization, multi-objective prioritization, and prioritization without coverage information are gaining interest.

Data flow information based testing (Rummel et al., 2005), user knowledge based ranking of test cases (Tonella et al., 2006), and dynamic runtime behaviour based test case clustering (Yoo et al., 2009) reveal more defects than control flow based criteria. Test cases have also been ranked based on classified events (Belli et al., 2007) and a clustering approach (Chen et al., 2018). Siavash Mirarab and Ladan Tahvildari built a Bayesian Network (BN) based prioritization approach (Mirarab and Tahvildari, 2007). The prospect of CIT (Combinatorial Interaction Testing) (Qu et al., 2007; Qu et al., 2008) and the construction of a call tree based prioritization approach (Smith et al., 2007) look very promising, as the prioritized test suite takes 82% less time to execute. P. R. Srivastava applied a non-homogeneous Poisson process for optimizing the testing time window (Srivastava, 2008).

Z. Li et al. applied meta-heuristic and evolutionary algorithms (Li et al., 2007) for TCP. S. Li et al. conducted simulation studies on search algorithms for test case prioritization; five search algorithms (Total Greedy, Additional Greedy, 2-Optimal Greedy, Hill Climbing and Genetic Algorithms) were studied (Li et al., 2010). Conrad et al. built a new framework called GELATIONS (GEnetic aLgorithm bAsed Test suIte priOritizatioN System) (Conrad et al., 2010). A study comparing the effectiveness of search based and greedy prioritizers was conducted (Williams and Kapfhammer, 2010). ACO (Ant Colony Optimization) techniques can also be used for prioritizing test cases (Singh et al., 2010; Agrawal and Kaur, 2018).

Dennis Jeffrey and Neelam Gupta prioritized test cases based on coverage of requirements in the relevant slices of the outputs of each test case. The study showed that if a test case traverses a modification, it will not necessarily expose a fault (Jeffrey and Gupta, 2008). ART (Adaptive Random Testing) (Jiang et al., 2009) and XML tags of WSDL (Web Service Description Language) documents were also used to order test suites (Mei et al., 2009).

Prioritizing test cases without coverage information is becoming popular. Mei et al. introduced JUPTA (JUnit Test Case Prioritization Techniques operating in the Absence of coverage information) (Mei et al., 2012). In another study, JUnit test cases were prioritized without using coverage information (Zhang et al., 2009b). Static black box oriented TCP (Thomas et al., 2014) and string distance based TCP (Ledru et al., 2012) do not require code or specification details, and test suites are prioritized without the execution of source code or specification models. Aggarwal et al. formulated a multiple parameter based prioritization model which originates from the SRS (Software Requirement Specification) (Aggarwal et al., 2005).

In 2014, Rothermel et al. formulated a unified strategy that combined the total and additional (not yet covered) techniques together (Hao et al., 2014). Fang et al. designed a similarity based test case prioritization technique in which the execution profiles of test cases were utilized; test case diversity yields better performance (Fang et al., 2014). In 2015, Epitropakis et al. clubbed three objectives: average percentage of coverage, average percentage of coverage of changed code, and average percentage of past faults covered (Epitropakis et al., 2015). Marchetto et al. proposed a multi-objective prioritization technique (Marchetto et al., 2016). Eghbali et al. developed a lexicographical ordering technique for breaking ties: vector x is considered to have a higher lexicographical rank than vector y if the first unequal element of x and y has a greater value in x (Eghbali and Tahvildari, 2016). The method proved to be beneficial when more than one test case achieves the same coverage.

We have described the common prioritization mechanisms (coverage/time/cost/history/model etc.) with their methodology, subject programs, results and shortcomings in the previous subsections. However, Test Case Prioritization is dynamically evolving, and that is why we have summarized these other upcoming prioritization viewpoints.

6. Conclusion and Future Work

As Test Driven Developments (TDD) are generating great profits, testing is not considered an overhead activity anymore. Test case prioritization techniques help in the orderly execution of test cases based on some performance or goal function (coverage/time/cost etc.). We have read all the chosen 90 papers in full length before preparing this review. We would like to conclude:

1. Even though APFD (Average Percentage of Faults Detected) is the most used metric for prioritization, Savings Factor, Coverage Effectiveness, APFDc (APFD per cost), NAPFD (Normalized APFD) etc. are also considered valuable metrics.
2. The SIR (Software-artifact Infrastructure Repository) library hosts several recurrently used subject programs (tcas, schedule, replace etc.) for TCP studies. UNIX utility programs (grep, flex), Java programs (ant, jmeter) and several other case study applications are also selected in repeated endeavours.
3. Coverage aware prioritization methods are dominant, while requirement based methods are the second best. Model based TCP and search algorithm based TCP are also finding more attention nowadays.

We would also like to recommend several directions for future work:

1. Many researchers have addressed the effect of source code modification on test case prioritization; however, alterations (augmentation, deletion etc.) at the test suite level need more exploration. Prioritization techniques for ordering multiple test suites can also be designed.
2. Even though TCP techniques without coverage information have shown promising results, they are still not that popular. Acquiring coverage information is tedious and costly in many cases. In this context, techniques without coverage information (Petke et al., 2015; Parejo et al., 2016) should be researched more.
3. Luo et al. (2016) indicated that a combination of static techniques (which include source code intervention) and dynamic techniques (which use run time execution data) might be more beneficial for prioritizing test cases. However, it remains an open question which combination works best. Among the static techniques, call graph based and topic model based methods are popular; the total-additional strategy, search based approaches and adaptive random testing (ART) are common dynamic techniques. Researchers might delve into which combination of these techniques works best.
4. Researchers have also indicated that TCP based on program change information outperforms static or dynamic techniques (Saha et al., 2015). This scenario can open an entirely new avenue which will be beneficial for large regression test suites.
5. Further exploration is needed for large software product line (SPL) related TCP. Even though t-wise combinatorial testing helps in reducing the number of test cases, it is mostly limited to small values of t. Henard et al. (2014) proposed a similarity based approach which can be an alternative to the t-wise approach. In this context, realistic approaches for large SPLs should be explored more.
6. Bertolino et al. (2015) opened up a new angle addressing TCP with access control. They indicated similarity criteria are useful for prioritization of access control tests. To the best of our knowledge, we do not see any other study exploring this view, so future work might investigate access control test case prioritization in depth.

We hope this article will be beneficial for both beginners and experienced professionals who have far-reaching interest in the TCP domain. We believe this is the first review with a thorough report of the last 18 years of TCP techniques.

References

Aggarwal, K.K., Singh, Y., Kaur, A., 2005. A multiple parameter test case prioritization model. Journal of Statistics and Management Systems 8, 369–386.
Agrawal, A.P., Kaur, A., 2018. A Comprehensive Comparison of Ant Colony and Hybrid Particle Swarm Optimization Algorithms Through Test Case Selection, 397–405.
Alspaugh, S., Walcott, K.R., Belanich, M., Kapfhammer, G.M., Soffa, M.L., 2007. Efficient time-aware prioritization with knapsack solvers. Proc. 1st ACM Int. Workshop on Empirical Assessment of Software Engineering Languages and Technologies, WEASELTech 2007, held with the 22nd IEEE/ACM Int. Conf. Automated Software Eng., ASE 2007, 13–18.
Arafeen, M.J., Do, H., 2013. Test case prioritization using requirements-based clustering. Proceedings - IEEE 6th International Conference on Software Testing, Verification and Validation, ICST 2013, 312–321.
Belli, F., Eminov, M., Gökçe, N., 2007. A Fuzzy Clustering Approach and Case Study. Dependable Computing, 95–110.
Bertolino, A., Daoudagh, S., El Kateb, D., Henard, C., Le Traon, Y., Lonetti, F., Marchetti, E., Mouelhi, T., Papadakis, M., 2015. Similarity testing for access control. Information and Software Technology 58, 355–372.
Bryce, R.C., Memon, A.M., 2007. Test suite prioritization by interaction coverage. Workshop on Domain Specific Approaches to Software Test Automation, in conjunction with the 6th ESEC/FSE joint meeting - DOSTA '07, 1–7.
Bryce, R.C., Sampath, S., Memon, A.M., 2011. Developing a single model and test prioritization strategies for event-driven software. IEEE Transactions on Software Engineering 37, 48–64.
Catal, C., Mishra, D., 2013. Test case prioritization: A systematic mapping study. Software Quality Journal 21, 445–478.
Chen, J., Zhu, L., Yueh, T., Towey, D., Kuo, F.C., Huang, R., 2018. Test case prioritization for object-oriented software: An adaptive random sequence approach based on clustering. The Journal of Systems and Software 135, 107–125.
Chittimalli, P.K., Harrold, M.J., 2009. Recomputing coverage information to assist regression testing. IEEE Transactions on Software Engineering 35, 452–469.
Conrad, A.P., Roos, R.S., Kapfhammer, G.M., 2010. Empirically studying the role of selection operators during search-based test suite prioritization. Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation - GECCO '10, 1373.
Do, H., Mirarab, S., 2010. The Effects of Time Constraints on Test Case Prioritization: A Series of Controlled Experiments. IEEE Transactions on Software Engineering 36, 593–617.
Do, H., Rothermel, G., 2006. On the use of mutation faults in empirical assessments of test case prioritization techniques. IEEE Transactions on Software Engineering 32, 733–752.
Do, H., Rothermel, G., Kinneer, A., 2006. Prioritizing JUnit test cases: An empirical assessment and cost-benefits analysis. Empirical Software Engineering 11, 33–70.
Eghbali, S., Tahvildari, L., 2016. Test Case Prioritization Using Lexicographical Ordering. IEEE Transactions on Software Engineering 42, 1178–1195.
Elbaum, S., Kallakuri, P., Malishevsky, A., Rothermel, G., Kanduri, S., 2003. Understanding the effects of changes on the cost-effectiveness of regression testing techniques. Software Testing, Verification and Reliability 13, 65–83.
Elbaum, S., Malishevsky, A., Rothermel, G., 2002. Test case prioritization: A family of empirical studies. IEEE Transactions on Software Engineering 28, 159–182.
Elbaum, S., Rothermel, G., Kanduri, S., 2004. Selecting a Cost-Effective Test Case Prioritization, 185–210.
Elbaum, S., Rothermel, G., Penix, J., 2014. Techniques for improving regression testing in continuous integration development environments. Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering - FSE 2014, 235–245.
Epitropakis, M.G., Yoo, S., Harman, M., Burke, E.K., 2015. Empirical evaluation of pareto efficient multi-objective regression test case prioritisation. Proceedings of the 2015 International Symposium on Software Testing and Analysis - ISSTA 2015, 234–245.
Erdogmus, H., Morisio, M., Torchiano, M., 2005. On the effectiveness of the test-first approach to programming. IEEE Transactions on Software Engineering 31, 226–237.
Fang, C., Chen, Z., Wu, K., Zhao, Z., 2014. Similarity-based test case prioritization using ordered sequences of program entities. Software Quality Journal 22, 335–361.
Fang, C.R., Chen, Z.Y., Xu, B.W., 2012. Comparing logic coverage criteria on test case prioritization. Science China Information Sciences 55, 2826–2840.
Fazlalizadeh, Y., Khalilian, A., Abdollahi Azgomi, M., Parsa, S., 2009. Prioritizing test cases for resource constraint environments using historical test case performance data. Proceedings - 2009 2nd IEEE International Conference on Computer Science and Information Technology, ICCSIT 2009, 190–195.
Feldt, R., Poulding, S., Clark, D., Yoo, S., 2016. Test Set Diameter: Quantifying the Diversity of Sets of Test Cases. Proceedings - 2016 IEEE International Conference on Software Testing, Verification and Validation, ICST 2016, 223–233. arXiv:1506.03482.
Hao, D., Zhang, L., Mei, H., 2016a. Test-case prioritization: achievements and challenges. Frontiers of Computer Science 10, 769–777.
Hao, D., Zhang, L., Zang, L., Wang, Y., Wu, X., Xie, T., 2016b. To Be Optimal or Not in Test-Case Prioritization. IEEE Transactions on Software Engineering 42, 490–504.
Hao, D., Zhang, L., Zhang, L., Rothermel, G., Mei, H., 2014. A Unified Test Case Prioritization Approach. ACM Trans. Softw. Eng. Methodol. 24, 10:1–10:31.
Haymar, K., Hla, S., 2008. Applying Particle Swarm Optimization to Prioritizing Test Cases for Embedded Real Time Software Retesting, 527–532.
Hemmati, H., Arcuri, A., Briand, L., 2013. Achieving scalable model-based testing through test case diversity. ACM Transactions on Software Engineering and Methodology 22, 1–42.
Henard, C., Papadakis, M., Harman, M., Jia, Y., Traon, Y.L., 2016. Comparing White-Box and Black-Box Test Prioritization. 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), 523–534.
Henard, C., Papadakis, M., Perrouin, G., Klein, J., Heymans, P., Traon, Y.L., 2014. Bypassing the combinatorial explosion: Using similarity to generate and prioritize t-wise test configurations for software product lines. IEEE Transactions on Software Engineering 40, 650–670. arXiv:1211.5451v1.
Huang, Y.C., Peng, K.L., Huang, C.Y., 2012. A history-based cost-cognizant test case prioritization technique in regression testing. Journal of Systems and Software 85, 626–637.
Jeffrey, D., Gupta, N., 2008. Experiments with test case prioritization using relevant slices. Journal of Systems and Software 81, 196–221.
Jiang, B., Zhang, Z., Chan, W.K., Tse, T.H., 2009. Adaptive random test case prioritization. ASE 2009 - 24th IEEE/ACM International Conference on Automated Software Engineering, 233–244.
Jones, J.A., Harrold, M.J., 2001. Test-suite reduction and prioritization for modified condition/decision coverage. IEEE International Conference on Software Maintenance, ICSM 29, 92–103.
Juristo, N., Moreno, A.M., Vegas, S., 2004. Reviewing 25 Years of Testing Technique Experiments. Empirical Software Engineering 9, 7–44. arXiv:1112.2903v1.
Kapfhammer, G.M., Soffa, M.L., 2007. Using coverage effectiveness to evaluate test suite prioritizations. Proceedings of the 1st ACM International Workshop on Empirical Assessment of Software Engineering Languages and Technologies, held in conjunction with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE) 2007 - WEASELTech '07, 19–20.
Kim, J.M., Porter, A., 2002. A history-based test prioritization technique for regression testing in resource constrained environments. Proceedings of the 24th International Conference on Software Engineering - ICSE '02, 119.
Kim, S., Baik, J., 2010. An effective fault aware test case prioritization by incorporating a fault localization technique. Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement - ESEM '10, 1.
Korel, B., Koutsogiannakis, G., Tahat, L.H., 2007. Model-based test prioritization heuristic methods and their evaluation. Proceedings of the 3rd International Workshop on Advances in Model-Based Testing - A-MOST '07, 34–43.
Korel, B., Koutsogiannakis, G., Tahat, L.H., 2008. Application of system models in regression test suite prioritization. 2008 IEEE International Conference on Software Maintenance, 247–256.
Korel, B., Tahat, L.H., Harman, M., 2005. Test prioritization using system models. IEEE International Conference on Software Maintenance, ICSM 2005, 559–568.
Krishnamoorthi, R., Sahaaya Arul Mary, S.A., 2009. Factor oriented requirement coverage based system test case prioritization of new and regression test cases. Information and Software Technology 51, 799–808.
Ledru, Y., Petrenko, A., Boroday, S., Mandran, N., 2012. Prioritizing test cases with string distances. Automated Software Engineering 19, 65–95.
Leon, D., Podgurski, A., 2003. A comparison of coverage-based and distribution-based techniques for filtering and prioritizing test cases. Proceedings - International Symposium on Software Reliability Engineering, ISSRE 2003, 442–453.
Li, S., Bian, N., Chen, Z., You, D., He, Y., 2010. A Simulation Study on Some Search Algorithms for Regression Test Case Prioritization. 2010 10th International Conference on Quality Software, 72–81.
Li, Z., Harman, M., Hierons, R.M., 2007. Search algorithms for regression test case prioritization. IEEE Transactions on Software Engineering 33, 225–237.
Lu, Y., Lou, Y., Cheng, S., Zhang, L., Hao, D., Zhou, Y., Zhang, L., 2016. How does regression test prioritization perform in real-world software evolution? Proceedings of the 38th International Conference on Software Engineering - ICSE '16, 535–546.
Luo, Q., Moran, K., Poshyvanyk, D., 2016. A Large-Scale Empirical Comparison of Static and Dynamic Test Case Prioritization Techniques. Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, 559–570.
Malishevsky, A.G., Rothermel, G., Elbaum, S., 2002. Modeling the cost-benefits tradeoffs for regression testing techniques. Software Maintenance, 2002. Proceedings. International Conference on, 204–213.
Malishevsky, A.G., Ruthruff, J.R., Rothermel, G., Elbaum, S., 2006. Cost-cognizant Test Case Prioritization. Department of Computer Science and Engineering, University of Nebraska–Lincoln, Technical Report, 1–41.
Marchetto, A., Islam, M.M., Asghar, W., Susi, A., Scanniello, G., 2016. A Multi-Objective Technique to Prioritize Test Cases. IEEE Transactions on Software Engineering 42, 918–940.
Mei, H., Hao, D., Zhang, L., Zhang, L., Zhou, J., Rothermel, G., 2012. A static approach to prioritizing JUnit test cases. IEEE Transactions on Software Engineering 38, 1258–1275.
Mei, L., Chan, W.K., Tse, T.H., Merkel, R.G., 2009. Tag-based techniques for black-box test case prioritization for service testing. Proceedings - International Conference on Quality Software, 21–30.
Memon, A., Gao, Z., Nguyen, B., Dhanda, S., Nickell, E., Siemborski, R., Micco, J., 2017. Taming Google-scale continuous testing. Proceedings - 2017 IEEE/ACM 39th International Conference on Software Engineering: Software Engineering in Practice Track, ICSE-SEIP 2017, 233–242.
Memon, A.M., Xie, Q., 2005. Studying the fault-detection effectiveness of GUI test cases for rapidly evolving software. IEEE Transactions on Software Engineering 31, 884–896.
Mirarab, S., Tahvildari, L., 2007. A Prioritization Approach for Software Test Cases Based on Bayesian Networks, 276–290.
Mogyorodi, G., 2001. Requirements-Based Testing: An Overview. Integration, The VLSI Journal, 286–295.
Nardo, D.D., Alshahwan, N., Briand, L., Labiche, Y., 2015. Coverage-based regression test case selection, minimization and prioritization: a case study on an industrial system. Software Testing, Verification and Reliability 25, 371–396.
Panigrahi, C.R., Mall, R., 2010. Model-based regression test case prioritization. ACM SIGSOFT Software Engineering Notes 35, 1.
Panigrahi, C.R., Mall, R., 2014. A heuristic-based regression test case prioritization approach for object-oriented programs. Innovations in Systems and Software Engineering 10, 155–163.
Parejo, J.A., Sánchez, A.B., Segura, S., Ruiz-Cortés, A., Lopez-Herrejon, R.E., Egyed, A., 2016. Multi-objective test case prioritization in highly configurable systems: A case study. Journal of Systems and Software 122, 287–310.
Park, H., Ryu, H., Baik, J., 2008. Historical value-based approach for cost-cognizant test case prioritization to improve the effectiveness of regression testing. Proceedings - The 2nd IEEE International Conference on Secure System Integration and Reliability Improvement, SSIRI 2008, 39–46.
Petke, J., Cohen, M.B., Harman, M., Yoo, S., 2015. Practical Combinatorial Interaction Testing: Empirical Findings on Efficiency and Early Fault Detection. IEEE Transactions on Software Engineering 41, 901–924.
Qu, X., Cohen, M.B., Rothermel, G., 2008. Configuration-aware regression testing: An Empirical Study of Sampling and Prioritization. Proceedings of the 2008 International Symposium on Software Testing and Analysis - ISSTA '08, 75.
Qu, X., Cohen, M.B., Woolf, K.M., 2007. Combinatorial interaction regression testing: A study of test case generation and prioritization. IEEE International Conference on Software Maintenance, ICSM, 255–264.
Rothermel, G., Harrold, M., 1996. Analyzing regression test selection techniques. IEEE Transactions on Software Engineering 22, 529–551.
Rothermel, G., Harrold, M., Ostrin, J., Hong, C., 1998. An empirical study of the effects of minimization on the fault detection capabilities of test suites. Proceedings, International Conference on Software Maintenance (Cat. No. 98CB36272), 34–43.
Rothermel, G., Untch, R.H., Chu, C., Harrold, M.J., 2001. Prioritizing test cases for regression testing. IEEE Transactions on Software Engineering 27, 929–948.
Rummel, M.J., Kapfhammer, G.M., Thall, A., 2005. Towards the prioritization of regression test suites with data flow information. Proceedings of the 2005 ACM Symposium on Applied Computing - SAC '05, 1499.
Saff, D., Ernst, M.D., 2003. Reducing wasted development time via continuous testing. Proceedings - International Symposium on Software Reliability Engineering, ISSRE 2003, 281–292.
Saha, R.K., Zhang, L., Khurshid, S., Perry, D.E., 2015. An information retrieval approach for regression test prioritization based on program changes. Proceedings - International Conference on Software Engineering 1, 268–279.
Salem, Y.I., Hassan, R., 2011. Requirement-based test case generation and prioritization. ICENCO'2010 - 2010 International Computer Engineering Conference: Expanding Information Society Frontiers, 152–157.
Sampath, S., Bryce, R.C., Viswanath, G., Kandimalla, V., Koru, A.G., 2008. Prioritizing user-session-based test cases for web applications testing. Proceedings of the 1st International Conference on Software Testing, Verification and Validation, ICST 2008, 141–150.
Singh, Y., Kaur, A., Suri, B., 2010. Test case prioritization using ant colony optimization. ACM SIGSOFT Software Engineering Notes 35, 1.
Smith, A., Geiger, J., Kapfhammer, G.M., Soffa, M.L., 2007. Test suite reduction and prioritization with call trees. Proceedings of the Twenty-Second IEEE/ACM International Conference on Automated Software Engineering - ASE '07, 539.
Smith, A.M., Kapfhammer, G.M., 2009. An empirical study of incorporating cost into test suite reduction and prioritization. Proceedings of the 2009 ACM Symposium on Applied Computing - SAC '09 1, 461.
Sprenkle, S., Gibson, E., Pollock, L., Souter, A., 2005. An empirical comparison of test suite reduction techniques for user-session-based testing of Web applications. 21st IEEE International Conference on Software Maintenance (ICSM'05), 587–596.
Srikanth, H., Banerjee, S., Williams, L., Osborne, J., 2013. Towards the prioritization of system test cases. Software Testing, Verification and Reliability 24, 320–337.
Srikanth, H., Cohen, M.B., Qu, X., 2009. Reducing field failures in system configurable software: Cost-based prioritization. Proceedings - International Symposium on Software Reliability Engineering, ISSRE, 61–70.
Srikanth, H., Williams, L., 2005. On the economics of requirements-based test case prioritization. ACM SIGSOFT Software Engineering Notes 30, 1.
Srikanth, H., Williams, L., Osborne, J., 2005. System test case prioritization of new and regression test cases. 2005 International Symposium on Empirical Software Engineering, ISESE 2005, 64–73.
Srivastava, A., Thiagarajan, J., 2002. Effectively prioritizing tests in development environment. ACM SIGSOFT Software Engineering Notes 27, 97.
Srivastava, P.R., 2008. Model for optimizing software testing period using non homogenous Poisson process based on cumulative test case prioritization. IEEE Region 10 Annual International Conference, Proceedings/TENCON, 1–6.
Srivastva, P.R., Kumar, K., Raghurama, G., 2008. Test case prioritization based on requirements and risk factors. ACM SIGSOFT Software Engineering Notes 33, 1.
Stallbaum, H., Metzger, A., Pohl, K., 2008. An automated technique for risk-based test case generation and prioritization. ... on Automation of Software Test, 67.
Thomas, S.W., Hemmati, H., Hassan, A.E., Blostein, D., 2014. Static test case prioritization using topic models. Empirical Software Engineering 19.
Tonella, P., Avesani, P., Susi, A., 2006. Using the case-based ranking methodology for test case prioritization. IEEE International Conference on Software Maintenance, ICSM, 123–132.
Uusitalo, E.J., Komssi, M., Kauppinen, M., Davis, A.M., 2008. Linking requirements and testing in practice. Proceedings of the 16th IEEE International Requirements Engineering Conference, RE'08, 265–270.
Walcott, K.R., Soffa, M.L., Kapfhammer, G.M., Roos, R.S., 2006. TimeAware test suite prioritization. Proceedings of the 2006 International Symposium on Software Testing and Analysis - ISSTA '06, 1.
Williams, Z.D., Kapfhammer, G.M., 2010. Using Synthetic Test Suites to Empirically Compare Search-based and Greedy Prioritizers. Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation (GECCO '10), 2119–2120.
Wong, W.E., Morgan, J.R., London, S., Mathur, A.P., 1998. Effect of test set minimization on fault detection effectiveness. Software - Practice and Experience 28, 347–369.
Yoo, S., Harman, M., 2007. Regression Testing Minimisation, Selection and Prioritisation: A Survey. Softw. Test. Verif. Reliab. 00, 1–7.
Yoo, S., Harman, M., Tonella, P., Susi, A., 2009. Clustering test cases to achieve effective and scalable prioritisation incorporating expert knowledge. Proc. ISSTA, 201–212.
Yoon, M., 2012. A Test Case Prioritization through Correlation of Requirement and Risk. Journal of Software Engineering and Applications 05, 823–836.
You, D., Chen, Z., Xu, B., Luo, B., Zhang, C., 2011. An empirical study on the effectiveness of time-aware test case prioritization techniques. Proceedings of the 2011 ACM Symposium on Applied Computing, 1451–1456.
Zhang, L., Hou, S.S., Guo, C., Xie, T., Mei, H., 2009a. Time-aware test-case prioritization using integer linear programming. Proceedings of the Eighteenth International Symposium on Software Testing and Analysis - ISSTA '09, 401–419.
Zhang, L., Zhou, J., Hao, D., Zhang, L., Mei, H., 2009b. JTop: Managing JUnit test cases in absence of coverage information. ASE 2009 - 24th IEEE/ACM International Conference on Automated Software Engineering, 677–679.
Zhang, X., Nie, C., Xu, B., Qu, B., 2007. Test case prioritization based on varying testing requirement priorities and test case costs. Proceedings - International Conference on Quality Software, 15–24.