
1

Organization Model for Usability Testing in ITRACT

“We intend to develop and test innovative tools for efficient, user- and environment-friendly transport networks across the NSR.” [1]

This sentence states one of the main goals of ITRACT. For WP5, the goal is defined as:

“The aim of WP5 is to test and evaluate the newly developed solutions for sustainable, user-friendly transport management.”

One of the first steps in ITRACT is therefore to create an organization model for testing these new, user-friendly applications.

In Gabler's Wirtschaftslexikon, user-friendliness is defined as an attribute of software quality: the property of a software product, especially of its interface and dialog system, of being adjusted to the user's requirements [2].

User-friendly software is software that is easy to use, i.e. “usable”. Software usability is therefore one of the key topics of WP5.

Usability:

Usability is a well-studied field, codified in the DIN EN ISO 9241 standard. Part 110
describes seven dialogue principles.

These seven principles are [3]:

 suitability for the task (the dialogue should be suitable for the user’s task and skill
level);
 self-descriptiveness (the dialogue should make clear what the user should do next);
 controllability (the user should be able to control the pace and sequence of the
interaction);
 conformity with user expectations (it should be consistent);
 error tolerance (the dialogue should be forgiving);
 suitability for individualization (the dialogue should be able to be customized to suit
the user);
 suitability for learning (the dialogue should support learning).

Ben Shneiderman has also done research in this field and formulated eight golden rules [4]:

 strive for consistency,
 enable frequent users to use shortcuts,
 offer informative feedback,
 design dialogs to yield closure,
 offer error prevention and simple error handling,
 permit easy reversal of actions,
 support internal locus of control,
 reduce short-term memory load.

Theres Gniwotta, Knut Barghorn, FB MIT, Jade Hochschule, February 2013
Jakob Nielsen formulated ten heuristics [5]:

 visibility of system status,
 match between system and the real world,
 user control and freedom,
 consistency and standards,
 error prevention,
 recognition rather than recall,
 flexibility and efficiency of use,
 aesthetic and minimalist design,
 help users recognize, diagnose, and recover from errors,
 help and documentation.

We combined all these principles, rules and heuristics and reduced them to a checklist that developers should follow while working on the applications. We also present some methods that show how the checkpoints can be evaluated.

Some methods must be used before the implementation begins, others accompany the implementation, and some can be applied in a last step of the implementation.

Prearrangements:
Personas:

In ITRACT the transport companies defined some personas. These personas are typical users of the transport system in the respective region.

Definition of target groups:

For every application, target groups should be defined. The main question is: who shall use the application? It is not compulsory that an application addresses all defined personas.

Test methods for the conception and implementation phases


Use cases

A use case is a description of how users will perform tasks with your application: a sequence of actions that the system performs while interacting with an actor. Actors can be described by personas.

This method should be used before the implementation starts.

Each use case should answer the following questions:


 Who is using the website? => given by personas and target groups.
 What does the user want to do?
 What is the user's goal?

Use cases can be written as an easy-to-understand narrative, which makes them understandable for all engaged project members [6].

Edward Kenworthy [7] outlines eight steps to develop use cases:

1. Identify who is going to be using the website.
2. Pick one of those actors.
3. Define what that actor wants to do on the site. Each thing the actor does on the site becomes a use case.
4. For each use case, decide on the normal course of events when that actor is using the site.
5. Describe this basic course in the description of the use case, in terms of what the actor does and what the system does in response that the actor should be aware of.
6. When the basic course is described, consider alternate courses of events and add those to "extend" the use case.
7. Look for commonalities among the use cases. Extract these and note them as common course use cases.
8. Repeat steps 2 through 7 for all other actors.
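For illustration, the result of these steps can be captured in a small data structure. This is only a sketch; the actor, goal and course texts below are hypothetical examples, not taken from ITRACT:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One use case, as produced by steps 2-6 above."""
    actor: str                      # persona / target-group member (step 2)
    goal: str                       # what the actor wants to do (step 3)
    basic_course: list[str]         # normal course of events (steps 4-5)
    alternate_courses: list[list[str]] = field(default_factory=list)  # step 6

# Hypothetical example for a journey-planner application:
check_departures = UseCase(
    actor="commuting pupil",
    goal="find the next bus home",
    basic_course=[
        "Actor opens the start page",
        "Actor enters the destination stop",
        "System shows the next departures",
    ],
    alternate_courses=[
        ["System finds no connection", "System suggests nearby stops"],
    ],
)
```

Writing use cases in such a uniform shape makes step 7 (finding commonalities across actors) easier to carry out systematically.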

Card Sorting

Card sorting is a helpful method to design and evaluate the structure of the application, its navigation and its wording. A detailed process is given in “Card sorting: a definitive guide” by Spencer and Warfel [8]:

1. Divide the content and the structure/navigation into singular information units.
2. Write the information units on cards.
3. Elicit the participants' expectations with questions like:
a. What content do you expect under the navigation term …?
b. Which term would you expect for content about …?
4. Next, ask the participants to sort the cards by similarity. This reveals a possible structure for the application.

Card sorting can be run as an open or a closed sort.

 Open sort: users sort items into groups, make up their own groups and name them.
 Closed sort: users sort items into previously defined categories.
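One common way to analyze the sorts from step 4 is a card-by-card co-occurrence count: how often two cards ended up in the same group across participants. A minimal sketch, with hypothetical card names:

```python
from itertools import combinations
from collections import Counter

def co_occurrence(sorts):
    """Count, for each pair of cards, in how many groups they appear together.

    `sorts` is a list of participant results; each result is a list of
    groups, and each group is a list of card names.
    """
    counts = Counter()
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return counts

# Two hypothetical participants sorting four navigation terms:
sorts = [
    [["timetable", "departures"], ["tickets", "fares"]],
    [["timetable", "departures", "fares"], ["tickets"]],
]
matrix = co_occurrence(sorts)
# ("departures", "timetable") were grouped together by both participants.
```

Pairs with high counts are strong candidates for being placed under the same navigation term.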

Cognitive Walkthrough

This method evaluates suitability for learning. Usability experts put themselves in the position of the user and “walk through” the application. In this way typical user problems can be identified, although the cognitive walkthrough tends to report more potential problems than actually exist [9]. It is a time-saving and low-cost method because no test participants need to be recruited. It should be applied several times during the implementation process.

General Test Criteria

General test criteria are manifold, but most of them can be applied during the realization of the application. These tests should be repeated at fixed time intervals. The literature describes many different tests [11],[12],[13],[14]. The most important tests that are easy to perform are:

 Check the spelling of all texts and error messages.
 Pay attention to good error messages: they should be relevant, helpful, informative, clear, easy to understand, truthful and complete [15].
 Investigate the error rate.
 When forms must be filled out, review the logic of the field order and the clarity of the fields, so that wrong inputs can be avoided.
 Test the reaction time of the application.

With these tests, smaller problems can be solved directly. Furthermore, these tests are simple to perform and incur only minor costs.
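The reaction-time check in the last point can be automated. A minimal sketch using only the Python standard library; the URL is a placeholder, not a real ITRACT endpoint:

```python
import time
import urllib.request

def measure_response_time(url, attempts=5):
    """Return the average time in seconds to fetch `url`."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        total += time.perf_counter() - start
    return total / attempts

# Usage with a placeholder URL:
# avg = measure_response_time("http://example.org/app")
# if avg > 1.0: flag the application as responding too slowly
```

Run in a fixed interval (e.g. on every build), this turns the reaction-time criterion into a repeatable test rather than an occasional manual check.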

Test Methods with Participants

For the following methods, participants should be engaged. The participants should be persons from the specified target groups, and it is important that all target groups are represented.

Jakob Nielsen reports that 80% of the problems can be revealed by only five participants [16].
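This figure comes from Nielsen's problem-discovery model: with n participants and an average per-participant detection rate λ, the expected share of problems found is 1 − (1 − λ)^n. A quick sketch; λ = 0.31 is Nielsen's published average, used here only for illustration:

```python
def problems_found(detection_rate, participants):
    """Expected share of usability problems found (Nielsen's model)."""
    return 1 - (1 - detection_rate) ** participants

# With Nielsen's average per-participant detection rate of 0.31,
# five participants uncover roughly 84% of the problems:
share = problems_found(0.31, 5)
```

The curve flattens quickly, which is why adding participants beyond five yields diminishing returns for a single test round.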

Focus Groups

In ITRACT the main target groups are elderly people and pupils; this was revealed by the definition of the personas. A central difficulty could be the contradictions between these target groups, and the focus-group method is a good way to detect them. Normally its goal is to collect ideas and to understand the reasons for contradictions in how users understand the functions or in their behavior. Up to ten persons can discuss in an open group or be interviewed.

Usability Tests with Participants

Usability testing is a technique to evaluate applications by testing them with representative users. In the test, users try to complete typical tasks while observers (developers and business experts) watch, listen and take notes.

The goals are

 to evaluate whether participants are able to complete identified routine tasks successfully, and how long it takes them,
 to find out how satisfied participants are with your application,
 to identify changes required to improve user performance.

The following points should be considered:

 Let the participants try to complete typical tasks.
 The tasks should be embedded in a context that provides useful information to users.
 Ask the participants to think out loud.
 Test the application, not the participants.
 Keep notes of the behavior and thoughts of the participants.

Eye-tracking

Eye-tracking is an enhanced form of usability testing [17]. With an eye-tracking tool, the order in which objects in the application are observed can be determined, and the intensity with which individual objects are observed can be measured. Eye-tracking thus gives information about subconscious perception and information processing.

Mainly the following questions can be answered:

 Which elements of the site are perceived by users, and which are completely overlooked?
 Are navigation elements recognized as such?
 Which texts are read, and which are only scanned?
 Are users guided effectively to the content that is relevant to them?
 How quickly does a user decide to use a navigation point?
 How quickly does the user recognize important information?
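The “intensity of observation” is typically computed as dwell time per area of interest (AOI) from the recorded gaze samples. A minimal sketch with made-up screen regions and sample data (real eye-tracking software exports far richer records):

```python
def dwell_times(samples, aois):
    """Sum gaze-sample durations per area of interest.

    `samples` is a list of (x, y, duration_ms) gaze points;
    `aois` maps an AOI name to its bounding box (x0, y0, x1, y1).
    """
    totals = {name: 0 for name in aois}
    for x, y, duration in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += duration
    return totals

# Hypothetical screen regions and gaze samples:
aois = {"navigation": (0, 0, 800, 100), "content": (0, 100, 800, 600)}
samples = [(400, 50, 120), (400, 300, 350), (500, 320, 200)]
print(dwell_times(samples, aois))  # {'navigation': 120, 'content': 550}
```

Comparing dwell times across AOIs shows directly which elements are perceived and which are overlooked.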


Test Methods for the Pilot Phase

A/B testing and multivariate testing

While A/B testing tests different content for one visual element on a page, multivariate testing tests different content for many elements across one or more pages, to identify the combination of changes that yields the best result. Multivariate testing is often used after publishing an application [18].

Every variant should be supported by a hypothesis; otherwise the number of variants becomes too large to evaluate them all.

Multivariate testing can find the optimized appearance of:

 Headings: try different text, size, color.
 Images: try different sizes, different images, different positions on a page.
 Buttons: try different positions on a page, different sizes, colors and labels.
 Forms: try different field lengths, different field names, different order of fields.
 Especially for websites: try different background colors, different sizes of headlines, positions of logos, position of login, search fields and navigation bars.

The use of software like Google Website Optimizer (freeware) or similar tools is advised.
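For the pilot phase, variants should be assigned deterministically so that a returning user always sees the same variant. A common sketch hashes a stable user id; the variant names below are hypothetical:

```python
import hashlib

def assign_variant(user_id, variants):
    """Deterministically map a user id to one of the variants."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

variants = ["heading-large", "heading-small"]
# The same user always lands in the same bucket:
assert assign_variant("user-42", variants) == assign_variant("user-42", variants)
```

Because the hash spreads user ids evenly, each variant receives roughly the same share of users without any stored assignment table.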

Surveys

Surveys can take many forms: from multiple-choice questions to rating scales or open text answers, everything is possible. Creating a questionnaire is a complex task.

For fast and essential testing it may be adequate to use standardized questionnaires like the System Usability Scale (SUS) or the Computer System Usability Questionnaire (CSUQ).

The SUS, developed by Brooke [19], answers a strong need in the usability community for a tool that quickly and easily collects a user's subjective rating of a product's usability. Brooke called the SUS a “quick and dirty” method, but it is a widely used and accepted usability test method [20].

During the pilot phase, a number of users rate the following ten statements:

1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.

Each statement is rated on a scale from 1 (“I strongly disagree”) to 5 (“I strongly agree”).

Scoring:

 For odd-numbered items: subtract one from the user's response.
 For even-numbered items: subtract the user's response from 5.
 This scales all values from 0 to 4 (with four being the most positive response).
 Add up the converted responses for each user and multiply the total by 2.5. This converts the range of possible values from 0–40 to 0–100.

Results:

 100 points correspond to a perfect system without any usability problems.
 Values greater than 80 points indicate good usability.
 Values between 60 and 80 points are satisfactory.
 Values lower than 60 indicate significant problems.
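The scoring procedure above can be sketched directly:

```python
def sus_score(responses):
    """Compute the SUS score (0-100) from ten responses on a 1-5 scale.

    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS needs exactly ten responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A user who strongly agrees with all positive (odd) items and strongly
# disagrees with all negative (even) items yields a perfect score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Averaging `sus_score` over all pilot users gives the value to compare against the thresholds listed under "Results".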

The CSUQ, developed by Lewis [21], is a questionnaire with 19 questions answered on a seven-point scale [22].

The SUS or CSUQ questionnaire could be implemented in the pilot applications. An environment for the analysis must be worked out by the developers.

Closing Methods and Certification

DAkkS (Deutsche Akkreditierungsstelle GmbH)

The DAkkS is the German national accreditation body, which develops standardized procedures for usability tests. The procedures are based on the international standard DIN EN ISO 9241 and consist of well-defined steps. The guidelines are available on the DAkkS homepage [10].

A certification by DAkkS would be very valuable for an application; note, however, that it is subject to a fee.

The usability testing procedure in ITRACT

The planned applications differ greatly in functionality, and they run on different operating systems and hardware infrastructures. In addition, the applications are developed at various locations throughout Europe.

As seen above, testing is not a one-time process but a frequently repeated, accompanying one.

Most of the usability tests can easily be done by the developers themselves; the checklist attached to this document supports them. Nevertheless, it is reasonable to check each new application with an eye-tracking tool. The Jade Hochschule owns an eye-tracking system and would like to test up to ten different applications.


The Checklist

For every criterion, the listed evaluation methods apply; the state of work is tracked per criterion as undone / in process / done.

Effectiveness
 Identify users' goals: target groups, personas, use cases/scenarios, focus groups
 Provide precise information and extensive help: use cases/scenarios, cognitive walkthrough, usability tests, eye-tracking, surveys
 Create a good information structure: card sorting, DAkkS test method
 Offer useful and constructive functions: target groups, personas, use cases/scenarios, cognitive walkthrough, focus groups, usability tests, surveys

Efficiency
 Perform a task analysis: target groups, personas, use cases/scenarios, focus groups, surveys
 Reduce workload: use cases/scenarios, cognitive walkthrough, focus groups, usability tests
 Offer effective functions: use cases/scenarios, cognitive walkthrough, focus groups, usability tests, surveys
 Guarantee orientation: card sorting, cognitive walkthrough, focus groups, usability tests, eye-tracking
 The most important first: cognitive walkthrough, focus groups, usability tests, eye-tracking

Appropriateness of tasks
 Seclusion of dialogues: cognitive walkthrough, DAkkS test method, general test criteria, usability tests
 Offer a self-contained user interface: cognitive walkthrough, focus groups, eye-tracking, multivariate tests
 Definition of terms: card sorting, cognitive walkthrough, web analysis, general test criteria, focus groups, usability tests, eye-tracking, multivariate tests
 Guarantee adequate response time for each target group: target groups, personas, use cases/scenarios, cognitive walkthrough, focus groups, usability tests, surveys
 Give feedback: cognitive walkthrough, DAkkS test method, general test criteria, usability tests

Confirmation
 Give feedback for every step: cognitive walkthrough, DAkkS test method, general test criteria, usability tests
 Provide clear feedback: cognitive walkthrough, usability tests, multivariate tests, surveys
 Adapt type and extent of feedback to the task: use cases/scenarios, cognitive walkthrough, focus groups, usability tests, multivariate tests, surveys
 Give personal feedback: personas, use cases/scenarios, cognitive walkthrough, focus groups, usability tests, multivariate tests, surveys
 Give acoustic or visual feedback: use cases/scenarios, cognitive walkthrough, focus groups, usability tests, eye-tracking, multivariate tests, surveys

Controllability
 Set up control functions: personas, cognitive walkthrough, focus groups, usability tests
 Offer emergency exits: use cases/scenarios, cognitive walkthrough, DAkkS test method, usability tests, eye-tracking
 Support explorative learning: use cases/scenarios, cognitive walkthrough, focus groups, usability tests, eye-tracking, multivariate tests
 Adjustability of speed: personas, use cases/scenarios, cognitive walkthrough, usability tests
 Opportunity to choose between different work equipment: use cases/scenarios, cognitive walkthrough, focus groups, usability tests
 Support experienced users: personas, use cases/scenarios, cognitive walkthrough, usability tests

Consistency
 Provide fixed rules and certainty through consistency: target groups, personas, card sorting, cognitive walkthrough, DAkkS test method, general test criteria, focus groups, usability tests
 Provide an expectation-compliant information structure: personas, use cases/scenarios, card sorting, cognitive walkthrough, focus groups, usability tests, eye-tracking
 Mind design standards and conventions: cognitive walkthrough, focus groups, usability tests, eye-tracking, multivariate tests
 Consistency and conformity of terms with user expectations: card sorting, cognitive walkthrough, general test criteria, usability tests, eye-tracking, multivariate tests
 Predictable performance of tasks: use cases/scenarios, cognitive walkthrough, DAkkS test method, usability tests, eye-tracking
 Design of a comprehensive and detailed style guide: focus groups

Fault tolerance
 Perfect error-prone functions for the target group to avoid mistakes: target groups, personas, use cases/scenarios, cognitive walkthrough, focus groups, usability tests
 Permit minimal correction work: use cases/scenarios, cognitive walkthrough, focus groups, usability tests
 Give constructive error messages: use cases/scenarios, cognitive walkthrough, general test criteria, usability tests, multivariate tests
 Expectation-compliant design of errors: personas, cognitive walkthrough, usability tests, eye-tracking, multivariate tests

Customizability
 Offer individual and relevant information: target groups, personas, use cases/scenarios, cognitive walkthrough, focus groups, usability tests, eye-tracking, surveys
 Application adaptable to user characteristics: personas, cognitive walkthrough, focus groups, usability tests
 Application adaptable to previous knowledge: personas, cognitive walkthrough, focus groups, usability tests, eye-tracking, multivariate tests, surveys
 Offer conventional shortcuts: personas, use cases/scenarios, cognitive walkthrough, focus groups, usability tests, eye-tracking
 Support customizable information presentation and input devices: personas, use cases/scenarios, cognitive walkthrough, focus groups, usability tests, eye-tracking, multivariate tests, surveys

Suitability for learning
 Support learnable utilization: use cases/scenarios, cognitive walkthrough, DAkkS test methods, general test criteria, focus groups, usability tests
 Offer complete, clear, accurate and current manuals: use cases/scenarios, cognitive walkthrough, DAkkS test method, general test criteria, usability tests
 Offer precise help: use cases/scenarios, cognitive walkthrough, usability tests, eye-tracking, multivariate tests

Relief of short-term memory
 Reduce the number of options: card sorting, cognitive walkthrough, usability tests, eye-tracking, multivariate tests
 Allow rapid identification of objects, actions and options: cognitive walkthrough, general test criteria, usability tests, eye-tracking, multivariate tests
 Provide minimalist design and relevant information: cognitive walkthrough, usability tests, eye-tracking, multivariate tests
 Use concise language: card sorting, cognitive walkthrough, general test criteria, usability tests, eye-tracking, multivariate tests

Aesthetics
 Collaboration of designers, users and developers: personas, focus groups
 Mind the laws of perception: cognitive walkthrough, general test criteria, eye-tracking, multivariate tests
 Create pleasant color schemes: cognitive walkthrough, eye-tracking, multivariate tests
 Mind the laws of typography: cognitive walkthrough, general test criteria, eye-tracking, multivariate tests
 Consider different display devices: personas, use cases/scenarios, cognitive walkthrough, DAkkS test method, general test criteria, focus groups, eye-tracking, surveys


References

[1] Homepage of ITRACT; www.itract.eu; (Retrieved Jan. 2013).

[2] Gabler Wirtschaftslexikon: Definition; http://wirtschaftslexikon.gabler.de/Archiv/75615/benutzerfreundlichkeit-v5.html; (Retrieved Feb. 2013).

[3] International Organization for Standardization: DIN EN ISO 9241 Part 110; http://www.iso.org; (Retrieved Feb. 2013).

[4] Shneiderman, B.: Designing the User Interface: Strategies for Effective Human-Computer Interaction (3rd ed.); Addison-Wesley; (1998).

[5] Nielsen, J.: Heuristic evaluation. In Nielsen, J., and Mack, R. L. (Eds.), Usability Inspection Methods; John Wiley & Sons; (1994).

[6] Carroll, J. M.; Rosson, M. B.: Usability Engineering: Scenario-Based Development of Human-Computer Interaction; Morgan Kaufmann; (2001).

[7] Kenworthy, E.: Use case modeling: Capturing user requirements; http://www.zoo.co.uk/~z0001039/PracGuides/pg_use_cases.htm; (1997).

[8] Spencer, D.; Warfel, T.: Card sorting: a definitive guide; http://boxesandarrows.com/card-sorting-a-definitive-guide/; (2004).

[9] Wharton, C.; Rieman, J.; Lewis, C.; Polson, P.: The cognitive walkthrough method: A practitioner's guide. In Nielsen, J., and Mack, R. (Eds.), Usability Inspection Methods; John Wiley & Sons; (1994).

[10] Homepage of DAkkS: Leitfaden Usability; http://www.dakks.de; (Retrieved Feb. 2013).

[11] Courage, C.; Baxter, K.: Understanding Your Users: A Practical Guide to User Requirements Methods, Tools, and Techniques; Morgan Kaufmann; (2005).

[12] Barnum, C. M.: Usability Testing Essentials; Elsevier; (2011).

[13] Albers, M.; Still, B. (Eds.): Usability of Complex Information Systems; CRC Press; (2011).

[14] Tullis, T.; Albert, B.: Measuring the User Experience; Elsevier/Morgan Kaufmann; (2008).

[15] Grice, H. P.: Logic and Conversation. In Martinich, A. P. (Ed.), Philosophy of Language; Oxford University Press; (1975).

[16] Nielsen, J.: Why You Only Need to Test with 5 Users; http://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/; (Retrieved Feb. 2013).

[17] Nielsen, J.; Pernice, K.: Eyetracking Web Usability; New Riders; (2010).

[18] Eberhard-Yom, M.: Usability als Erfolgsfaktor; Cornelsen Verlag; (2010).

[19] Brooke, J.: SUS: a “quick and dirty” usability scale. In Jordan, P. W.; Thomas, B.; Weerdmeester, B. A.; McClelland, A. L. (Eds.), Usability Evaluation in Industry; Taylor and Francis; (1996).

[20] Sauro, J.: Measuring Usability with the System Usability Scale (SUS); http://www.measuringusability.com/sus.php; (2009).

[21] Lewis, J. R.: IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and Instructions for Use. International Journal of Human-Computer Interaction, 7:1, 57-78; (1995).

[22] Human-Computer Interaction Resources: The Computer System Usability Questionnaire; http://hcibib.org/perlman/question.cgi; (Retrieved Feb. 2013).