Organization Model For Usability-Testing in ITRACT
“We intend to develop and test innovative tools for efficient, user- and environment-friendly
transport networks across the NSR.” [1]
This sentence summarizes the main goal of ITRACT. The goal of WP5 is defined as follows:
“The aim of WP5 is to test and evaluate the newly developed solutions for sustainable, user-
friendly transport management.”
One of the first steps in ITRACT is to create an organization model for testing these new and
user-friendly applications.
Usability:
Usability is a well-studied field whose findings are codified in the DIN EN ISO 9241 standard [3]. Part 110
describes seven dialogue principles:
suitability for the task (the dialogue should be suitable for the user’s task and skill
level);
self-descriptiveness (the dialogue should make clear what the user should do next);
controllability (the user should be able to control the pace and sequence of the
interaction);
conformity with user expectations (it should be consistent);
error tolerance (the dialogue should be forgiving);
suitability for individualization (the dialogue should be able to be customized to suit
the user);
suitability for learning (the dialogue should support learning).
Ben Shneiderman also conducted research in this field and formulated eight golden rules [4]:
strive for consistency;
enable frequent users to use shortcuts;
offer informative feedback;
design dialogues to yield closure;
offer simple error handling;
permit easy reversal of actions;
support an internal locus of control;
reduce short-term memory load.
We combined all these principles, rules and heuristics and condensed them into a checklist
that developers should observe while working on the applications. We also present some
methods that show how the checkpoints can be evaluated.
Some methods must be used before the implementation begins. Others accompany the
implementation, and some can be applied in a final step of the implementation.
Prearrangements:
Personas:
In ITRACT the transport companies defined several personas. These personas are typical
users of the transport system in the respective region.
For every application, target groups should be defined. The main question is: Who shall use
the application? It is not compulsory that the application reaches all defined personas.
Use Cases:
A use case is a description of how users will perform tasks with your application. Use cases
are sequences of actions that the system performs while interacting with an actor. Actors can
be described by personas.
This method should be used before the implementation starts. Typical questions are:
Who is using the Website? => given by personas and target groups.
What does the user want to do?
What is the user's goal?
Card Sorting
Card Sorting is a helpful method to design and evaluate the structure of the application, the
navigation and the wording used by the application. A detailed process is given in “Card
sorting: a definitive guide” by Spencer and Warfel [7].
1. Divide the content and the structure / navigation in singular information units.
2. Write the information units on cards.
3. Find out the participants' expectations with questions like:
a. What content do you expect under the navigation term …?
b. Which term would you expect for content about …?
4. In a next step, ask the participant to sort the cards by similarity. In this way you can find
out a possible structure of the application.
Open Sort: Participants sort items into groups they create and name themselves.
Closed Sort: Participants sort items into previously defined categories.
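The similarity sorting in step 4 of an open sort is usually analyzed by counting how often two cards end up in the same group across participants. The following sketch illustrates this; the card names and participant data are hypothetical examples, not ITRACT data.

```python
from itertools import combinations
from collections import Counter

def co_occurrence(sorts):
    """Count how often each pair of cards is grouped together.

    `sorts` holds one entry per participant: a list of groups,
    where each group is a list of card labels.
    """
    pairs = Counter()
    for groups in sorts:
        for group in groups:
            # every unordered pair inside one group counts once
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Hypothetical results from three participants:
sorts = [
    [["Timetable", "Delays"], ["Tickets", "Fares"]],
    [["Timetable", "Delays", "Tickets"], ["Fares"]],
    [["Timetable", "Delays"], ["Tickets", "Fares"]],
]
pairs = co_occurrence(sorts)
print(pairs[("Delays", "Timetable")])  # 3: always grouped together
print(pairs[("Fares", "Tickets")])     # 2
```

Card pairs with high counts are strong candidates for the same navigation category.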
Cognitive Walkthrough
This method evaluates the suitability for learning. Usability experts put themselves in the
position of the user and "walk through" the application. With this method the typical user
problems can be identified. However, it must be said that the cognitive walkthrough appears
to detect far more potential problems than actually exist [9].
The cognitive walkthrough is a time-saving and low-cost method because no test persons
need to be recruited. This method should be used several times during the implementation
process.
General Test Criteria
General test criteria are varied, but most of them can be checked during the realization of the
application. These tests should be repeated at fixed time intervals. Diverse literature
describes many different tests [11],[12],[13],[14]. The most important tests that are easy to
handle are:
Check the spelling of all text and error messages.
Pay attention to good error messages. They should be relevant, helpful, informative,
clear, easy to understand, truthful and complete [15].
Investigate the error rate.
When forms must be filled out, review the logic of the field order and the clarity of the
fields, so that wrong inputs can be avoided.
Test the reaction time of the application.
Within these tests, smaller problems can be solved directly. Furthermore, these tests are
simple and incur only slight costs.
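The reaction-time check in particular can be automated with a small timing harness. The following Python sketch times an arbitrary action and reports the average and worst case; the action shown is only a computational stand-in for, e.g., requesting a timetable page.

```python
import time

def measure(action, repeats=10):
    """Time a user-visible action several times.

    Returns (worst_case_seconds, average_seconds); repeating the
    measurement smooths out one-off delays.
    """
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        action()
        samples.append(time.perf_counter() - start)
    return max(samples), sum(samples) / len(samples)

# Hypothetical stand-in for a real request to the application:
worst, avg = measure(lambda: sum(range(100_000)))
print(f"worst: {worst:.4f}s, average: {avg:.4f}s")
```

Comparing the worst case, not only the average, against a response-time budget keeps the occasional slow interaction visible.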
For the following methods, participants should be recruited. It is necessary to ensure that the
participants are persons from the specified target groups.
Jakob Nielsen reports that 80% of the problems can be revealed by only five participants
[16].
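Nielsen's figure follows from a simple problem-discovery model. The sketch below reproduces it using the commonly cited average single-user detection rate of p ≈ 0.31; the exact rate varies by study and application.

```python
def discovered(n_users, p=0.31):
    """Expected share of usability problems found with n participants.

    Uses the problem-discovery model 1 - (1 - p)**n, where p is the
    probability that a single participant reveals a given problem.
    """
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 10):
    print(n, round(discovered(n), 2))
# With five participants, roughly 84% of the problems are expected
# to surface, which matches Nielsen's 80% rule of thumb.
```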
Focus-Groups
In ITRACT the main target groups are elderly people and pupils. This circumstance was
revealed by the definition of the personas. The main problem for the target groups in ITRACT
could be contradictions between the target groups. The method "focus groups" is a good
way to detect these contradictions. Normally, its goal is to collect ideas and to understand
the opinions and needs of the users.
Eye-tracking
Eye-tracking is an advanced usability test [17]. With an eye-tracking tool, the order in which
objects in the application are observed can be determined. The intensity with which
individual objects are observed can also be measured. Typical questions are:
Which elements of my site are perceived by users and which are completely
overlooked?
Are navigation elements recognized as such?
Which texts are read and which are only scanned?
Are users guided effectively to the content that is relevant to them?
How fast does a user decide to use a navigation point?
How fast does the user recognize important information?
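Most eye-tracking tools export raw fixation data, which can be aggregated into dwell times per area of interest (AOI) to answer such questions. The sketch below assumes hypothetical fixation coordinates and screen regions; real exports and AOI definitions depend on the tool used.

```python
def dwell_time(fixations, aois):
    """Sum fixation durations per area of interest (AOI).

    `fixations`: (x, y, duration_ms) tuples from an eye tracker.
    `aois`: name -> (x0, y0, x1, y1) bounding boxes on the screen.
    """
    totals = {name: 0 for name in aois}
    for x, y, ms in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += ms
    return totals

# Hypothetical layout and fixation data:
aois = {"navigation": (0, 0, 800, 100), "content": (0, 100, 800, 600)}
fixations = [(400, 50, 220), (300, 300, 540), (600, 80, 180)]
print(dwell_time(fixations, aois))  # {'navigation': 400, 'content': 540}
```

An AOI with near-zero dwell time is a candidate for being "completely overlooked".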
Multivariate Testing
While A/B testing tests different content for one visual element on a page, multivariate
testing tests different content for many elements across one or more pages to identify the
combination of changes that yields the best result.
Every variant should be supported by a hypothesis; otherwise the number of variants is too
large to evaluate them all.
The use of software like Google Website Optimizer (freeware) or similar tools is advised.
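A common way to decide between two variants is a two-proportion z-test on the measured conversion or task-completion rates. The sketch below uses hypothetical pilot numbers; it is a minimal illustration, not a replacement for the tooling mentioned above.

```python
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score for comparing the conversion rates
    of variants A and B in an A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    return (p_b - p_a) / se

# Hypothetical pilot data: 120/1000 vs. 150/1000 task completions
z = z_score(120, 1000, 150, 1000)
print(round(z, 2))  # |z| > 1.96 indicates significance at the 5% level
```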
Surveys
Surveys can take very different forms. From multiple-choice questions to rating scales or
open text answers, everything is possible. Creating a questionnaire or opinionnaire is a
complex task.
For fast and essential testing it may be adequate to use standardized questionnaires like the
System Usability Scale (SUS) or the Computer System Usability Questionnaire (CSUQ).
The SUS, developed by Brooke [19], reflects a strong need in the usability community for a
tool that could quickly and easily collect a user's subjective rating of a product's usability.
Brooke named the SUS a quick and dirty method, but it is an often used and accepted
usability test method [20].
Ten statements have to be rated by a couple of users during the pilot phase, for example:
4. I think that I would need the support of a technical person to be able to use this
system.
7. I would imagine that most people would learn to use this system very quickly.
10. I needed to learn a lot of things before I could get going with this system.
Scoring: For each odd-numbered item the score contribution is the scale position minus 1;
for each even-numbered item it is 5 minus the scale position. The sum of the contributions is
multiplied by 2.5.
Results: The final score ranges from 0 to 100; according to Sauro [20], scores above
approximately 68 are above average.
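The SUS scoring rule can be expressed compactly; a minimal Python sketch:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from the ten
    item responses, each on a 1-5 scale, in questionnaire order."""
    total = 0
    for i, r in enumerate(responses, start=1):
        # odd items: scale position minus 1; even items: 5 minus position
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

The alternation accounts for the fact that the odd-numbered SUS items are positively worded and the even-numbered ones negatively.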
The CSUQ, developed by Lewis [21], is a questionnaire with 19 questions and a seven-point
answer scale [22].
DAkkS is the German national accreditation body, which develops standardized procedures
for usability tests. The procedures are based on the international standard DIN EN ISO 9241
and contain well-defined steps. The guidelines are available on the homepage of DAkkS [10].
The planned applications differ greatly in functionality, and they also run on different
operating systems and hardware infrastructures. In addition, the applications will be
developed at various locations throughout Europe.
As seen above, the testing is not a one-time process, but a frequently repeated,
accompanying process.
Most of the usability tests can easily be done by the developers. The checklist attached to
this document supports the developers.
Nevertheless, it is reasonable to check the new application by an eye-tracking tool. The Jade
Hochschule owns an eye-tracking system and would like to test up to ten different
applications.
The Checklist
Efficiency
Perform a task analysis | Methods: Target groups, Personas, Use cases / scenarios, Focus groups, Surveys | Status: undone / in process / done
Reduce workload | Methods: Use cases / scenarios, Cognitive walkthrough, Focus groups, Usability tests | Status: undone / in process / done
Offer effective functions | Methods: Use cases / scenarios, Cognitive walkthrough, Focus groups, Usability tests, Surveys | Status: undone / in process / done
Appropriateness of tasks
Seclusion of dialogues | Methods: Cognitive walkthrough, DAkkS test method, General test criteria, Usability tests | Status: undone / in process / done
Offer a self-contained user interface | Methods: Cognitive walkthrough, Focus groups, Eye-tracking, Multivariate tests | Status: undone / in process / done
Definition of terms | Methods: Card sorting, Cognitive walkthrough, Web analysis, General test criteria, Focus groups, Usability tests, Eye-tracking, Multivariate tests | Status: undone / in process / done
Guarantee adequate response time for each target group | Methods: Target groups, Personas, Use cases / scenarios, Cognitive walkthrough, Focus groups, Usability tests, Surveys | Status: undone / in process / done
Give feedback | Methods: Cognitive walkthrough, DAkkS test method, General test criteria, Usability tests | Status: undone / in process / done
Controllability
Set up control functions | Methods: Personas, Cognitive walkthrough, Focus groups, Usability tests | Status: undone / in process / done
Consistency
Consistency to provide fixed rules and certainty | Methods: Target groups, Personas, Card sorting, Cognitive walkthrough, DAkkS test method, General test criteria, Focus groups, Usability tests | Status: undone / in process / done
Fault tolerance
Provide error-tolerant functions for the target group to avoid mistakes | Methods: Target groups, Personas, Use cases / scenarios, Cognitive walkthrough, Focus groups, Usability tests | Status: undone / in process / done
Permit minimal correction work | Methods: Use cases / scenarios, Cognitive walkthrough, Focus groups, Usability tests | Status: undone / in process / done
Customizability
Offer individual and relevant information | Methods: Target groups, Personas, Use cases / scenarios, Cognitive walkthrough, Focus groups, Usability tests, Eye-tracking, Surveys | Status: undone / in process / done
Application adaptable to user characteristics | Methods: Personas, Cognitive walkthrough, Focus groups, Usability tests | Status: undone / in process / done
Application adaptable to previous knowledge | Methods: Personas, Cognitive walkthrough, Focus groups, Usability tests, Eye-tracking, Multivariate tests, Surveys | Status: undone / in process / done
Offer conventional shortcuts | Methods: Personas, Use cases / scenarios, Cognitive walkthrough, Focus groups, Usability tests, Eye-tracking | Status: undone / in process / done
Suitability for learning
Support learnable utilization | Methods: Use cases / scenarios, Cognitive walkthrough, DAkkS test method, General test criteria, Focus groups, Usability tests | Status: undone / in process / done
Offer complete, clear, accurate and current manuals | Methods: Use cases / scenarios, Cognitive walkthrough, DAkkS test method, General test criteria, Usability tests | Status: undone / in process / done
Offer precise help | Methods: Use cases / scenarios, Cognitive walkthrough, Usability tests, Eye-tracking, Multivariate tests | Status: undone / in process / done
Aesthetics
Collaboration of designers, users and developers | Methods: Personas, Focus groups | Status: undone / in process / done
Mind the laws of perception | Methods: Cognitive walkthrough, General test criteria, Eye-tracking, Multivariate tests | Status: undone / in process / done
Create pleasant color spaces | Methods: Cognitive walkthrough, Eye-tracking, Multivariate tests | Status: undone / in process / done
Mind the laws of typography | Methods: Cognitive walkthrough, General test criteria, Eye-tracking, Multivariate tests | Status: undone / in process / done
Consider different display devices | Methods: Personas, Use cases / scenarios, Cognitive walkthrough, DAkkS test method, General test criteria, Focus groups, Eye-tracking, Surveys | Status: undone / in process / done
References
[1] Homepage of ITRACT; www.itract.eu; (Retrieved Jan. 2013).
[3] International Organization for Standardization: DIN EN ISO 9241 Part 110;
http://www.iso.org; (Retrieved Feb.2013).
[4] Shneiderman, B.: Designing the user interface: Strategies for effective human-computer
interaction (3rd ed.); Addison-Wesley Publishing.; (1998).
[5] Nielsen, J.: Heuristic evaluation. In Nielsen, J., and Mack, R.L. (Eds.): Usability Inspection
Methods; John Wiley & Sons; (1994).
[9] Wharton, C., Rieman, J., Lewis, C., and Polson, P.; The cognitive walkthrough method: A
practitioner’s guide. In Nielsen, J., and Mack, R. (Eds.), Usability inspection methods.; John
Wiley & Sons, Inc.; (1994).
[11] Courage, C. & Baxter, K.: Understanding Your Users: A Practical Guide to User
Requirements Methods, Tools, and Techniques.; Morgan Kaufmann. (2005).
[13] Albers, M., Still, B. (Eds.): Usability of complex information systems; CRC Press; (2011).
[14] Tullis, T., Albert, B.: Measuring the User Experience; Elsevier/Morgan Kaufmann;
(2008).
[15] Grice, H. P.: Logic and Conversation. In Martinich, A.P. (Ed).: Philosophy of Language.
Oxford University Press; (1975).
[17] Nielsen, J., Pernice, K.: Eyetracking web usability; New Riders; (2010).
[19] Brooke, J.: SUS: A "quick and dirty" usability scale. In Jordan, P. W., Thomas, B.,
Weerdmeester, B. A., and McClelland, A. L. (Eds.): Usability Evaluation in Industry; Taylor
and Francis; (1996).
[20] Sauro, J.: Measuring Usability with the System Usability Scale (SUS);
http://www.measuringusability.com/sus.php; (2009).