Performance issues are often the cause of failures in
today’s large-scale software systems. These issues make performance
testing essential during software maintenance. However,
performance testing faces many challenges. One challenge
is determining how long a performance test must run. Although
performance tests often run for hours or days to uncover
performance issues (e.g., memory leaks), much of the data that is
generated during a performance test is repetitive. Performance
analysts can stop their performance tests (to reduce the time
to market and the costs of performance testing) if they know
that continuing the test will not provide any new information
about the system’s performance. To assist performance analysts
in deciding when to stop a performance test, we propose an
automated approach that measures how much of the data that is
generated during a performance test is repetitive. Our approach
then provides a recommendation to stop the test when the data
becomes highly repetitive and the repetitiveness has stabilized
(i.e., little new information about the system’s performance is
generated).
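To make the stopping criterion concrete, the following minimal sketch (in Python, not taken from the paper) illustrates one way such a recommendation could be computed over a stream of performance metric samples. The repetitiveness measure, window size, and thresholds here are hypothetical simplifications for illustration, not the approach's actual metric.

    import numpy as np

    def repetitiveness(history, window, bins=50):
        # Hypothetical measure: fraction of samples in `window` whose
        # binned values were already observed earlier in `history`.
        edges = np.histogram_bin_edges(np.concatenate([history, window]),
                                       bins=bins)
        seen = set(np.digitize(history, edges))
        hits = sum(1 for b in np.digitize(window, edges) if b in seen)
        return hits / len(window)

    def recommend_stop(metric_stream, window_size=1000,
                       rep_threshold=0.95, stable_eps=0.01,
                       stable_windows=3):
        # Recommend stopping once repetitiveness is high AND stable
        # across several consecutive windows (thresholds are assumptions).
        history, reps = [], []
        for i, sample in enumerate(metric_stream):
            history.append(sample)
            if (len(history) >= 2 * window_size
                    and len(history) % window_size == 0):
                past = history[:-window_size]
                recent = history[-window_size:]
                reps.append(repetitiveness(past, recent))
                last = reps[-stable_windows:]
                if (len(last) == stable_windows
                        and min(last) >= rep_threshold
                        and max(last) - min(last) <= stable_eps):
                    return i  # sample index at which to stop the test
        return None  # data ended before repetitiveness stabilized

The two-part condition mirrors the abstract's criterion: requiring a high repetitiveness score captures "the data becomes highly repetitive," while requiring a small spread across consecutive windows captures "the repetitiveness has stabilized."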