Perf Auto
This document presents ideas for what good performance-tailored automation should aim for, what parts comprise it, and how individuals or groups can use new developments in technology to enable sharing and reuse. Nothing herein is especially innovative; rather, it makes up the logical pieces of a good solution, many of which get overlooked. It is targeted at developers and maintainers of performance systems, whether new or existing, and should help direct decisions toward good long-term solutions. Each recommendation is based upon observation of strong points and shortcomings in various existing performance systems. This is a lessons-learnt document as much as a best-practice one. In the spirit of sharing, if you are interested in the points being made, please contact the author.
2 Terminology
The following terminology is used:

Automation: the control and co-ordination system that runs a performance-focused test.
Tooling: the scripts and tools which are run to set up resources, monitor resources, generate load, etc. This includes tools that would come packaged with a generic performance automation package as well as those which are specific to a certain product or performance domain.
System: the set of all machines, tooling and automation involved in a particular problem domain.
Product: the system or software product that is the actual focus of performance measurements.
1. Generic automation core
2. Performance-focussed automation core and modules
3. Product-specific automation modules (that know about the Product)
4. Generic tooling
5. Performance tooling
6. Product-specific tooling
The only areas a performance test developer needs to program/maintain are the user domain-specific tooling (i.e. things involved directly with performance testing the product you work on) and especially the automation module that handles them. This module is aware of what is being tested, knows how to invoke the commands, can run different styles of test, and can perform any other custom actions; a minimal sketch of such a module follows. Where items are more general, they should be written as reusable modules (e.g. STAF services) and made available at a higher level which can be shared by others. More detail on the elements making up reusable automation is given next.
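As an illustration only, a product-specific automation module might expose a small set of hooks that the generic core drives. The class, method and script names below are hypothetical placeholders, not part of any particular framework:

    # Hypothetical sketch of a product-specific automation module.
    # The generic automation core calls these hooks; only this module
    # knows how to invoke the product's own commands.

    import subprocess

    class ProductModule:
        """Product-specific knowledge: commands, test styles, custom actions."""

        # Test styles this product supports (names are illustrative).
        STYLES = ("throughput", "response_time", "soak")

        def setup(self):
            # Start the product under test; the command is product-specific.
            subprocess.run(["./start_product.sh"], check=True)

        def run_test(self, style, duration_s):
            if style not in self.STYLES:
                raise ValueError(f"unknown test style: {style}")
            # Invoke the product's load driver for the requested style.
            subprocess.run(
                ["./drive_load.sh", "--style", style, "--duration", str(duration_s)],
                check=True,
            )

        def teardown(self):
            # Custom actions: collect product logs, then stop the product.
            subprocess.run(["./collect_logs.sh"], check=True)
            subprocess.run(["./stop_product.sh"], check=True)

Keeping the interface this narrow is what allows the generic and performance-focussed layers above it to remain product-agnostic and reusable.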
In driving this workload it is desirable to keep track of any constraint conditions in the system. Examples of constraints include:

- Response time > 1 second
- CPU > 95%
- Memory usage > 80%
- Test duration > predicted (or previously observed average)
- Errors detected

A constraint monitor can take action (commonly just to end the test) if a particular rule is broken for more than a defined number of consecutive checks; a sketch of such a monitor follows. Without constraint checking, machines can be left in a limbo state which is hard to recover from (particularly with operating system constraints such as paging space), and it is often hard to determine where the bottleneck was after such an event.
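A minimal sketch of such a constraint monitor, assuming a simple polling loop; the rule names, the sampler callables and the stop action are all hypothetical and would be supplied by the tooling:

    import time

    class ConstraintRule:
        """One constraint: a sampler, a limit, and a breach budget."""

        def __init__(self, name, sample, limit, max_consecutive=3):
            self.name = name
            self.sample = sample              # callable returning a number
            self.limit = limit                # rule breaks when sample() > limit
            self.max_consecutive = max_consecutive
            self.consecutive = 0

        def check(self):
            # Count consecutive breaches; reset the count on a good sample.
            if self.sample() > self.limit:
                self.consecutive += 1
            else:
                self.consecutive = 0
            return self.consecutive > self.max_consecutive

    def monitor(rules, stop_test, interval_s=10):
        """Poll every rule; end the test when any rule stays broken.

        A real monitor would also exit when the test completes normally.
        """
        while True:
            for rule in rules:
                if rule.check():
                    # Common action: stop the test before the machine
                    # is left in an unrecoverable state (e.g. paging).
                    stop_test(reason=rule.name)
                    return
            time.sleep(interval_s)

For example, rules such as ConstraintRule("cpu", read_cpu_percent, 95) and ConstraintRule("response_time", read_p99_seconds, 1.0) would express the CPU and response-time constraints listed above, where read_cpu_percent and read_p99_seconds are assumed probes provided by the monitoring tooling.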
Whilst it is not possible to predict how performance will be measured or what the observed variables mean, templates can demonstrate how to post-process the data produced by the automation; a small example follows.
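As one illustration, a post-processing template might summarise a CSV of samples written by the automation; the file path and column name here are hypothetical:

    import csv
    import statistics

    def summarise(csv_path, column):
        """Compute simple summary statistics for one measured variable."""
        with open(csv_path, newline="") as f:
            values = [float(row[column]) for row in csv.DictReader(f)]
        return {
            "samples": len(values),
            "mean": statistics.mean(values),
            "median": statistics.median(values),
            "max": max(values),
        }

    # Example: summarise the (hypothetical) response_time column
    # written by the automation's monitoring tooling.
    print(summarise("results/run1.csv", "response_time"))

A template like this shows where the data lands and how to read it; the actual statistics and their interpretation remain the test developer's responsibility.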