HCI - Guidelines and Design Rules
Interaction
Designing Systems that work for People
design rules
! Principles of usability
" general understanding
types of design rules
! principles
" abstract design rules
" low authority
" high generality
! standards
" specific design rules
" high authority
" limited application
! guidelines
" lower authority
" more general application
[diagram: principles, guidelines and standards positioned along axes of increasing generality and increasing authority]
Principles to support
usability
Learnability
the ease with which new users can begin effective
interaction and achieve maximal performance
Flexibility
the multiplicity of ways the user and system exchange
information
Robustness
the level of support provided to the user in determining
successful achievement and assessment of goal-directed
behaviour
Principles of learnability (1)
Predictability
! determining effect of future actions
based on past interaction history
! operation visibility
Synthesizability
! assessing the effect of past actions
! immediate vs. eventual honesty
Principles of learnability (2)
Familiarity
! how prior knowledge applies to new system
! guessability; affordance
Generalizability
! extending specific interaction knowledge to
new situations
Consistency
! likeness in input/output behaviour arising from
similar situations or task objectives
Principles of flexibility (1)
Dialogue initiative
! freedom from system-imposed constraints on input
dialogue
! system vs. user pre-emptiveness
Multithreading
! ability of system to support user interaction for more
than one task at a time
! concurrent vs. interleaving; multimodality
Task migratability
! passing responsibility for task execution between user
and system
Principles of flexibility (2)
Substitutivity
! allowing equivalent values of input and output
to be substituted for each other
! representation multiplicity; equal opportunity
Customizability
! modifiability of the user interface by user
(adaptability) or system (adaptivity)
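A small hypothetical sketch (my own example, not from the slides) of the distinction: adaptability is the user changing the interface themselves, e.g. setting a font size, while adaptivity is the system changing it on the user's behalf, e.g. reordering a menu by observed usage.

```python
# Hypothetical sketch contrasting adaptability and adaptivity.
from collections import Counter

class Menu:
    def __init__(self, items):
        self.items = list(items)
        self.font_size = 12          # adaptability: a user-set preference
        self._usage = Counter()

    def set_font_size(self, size):   # the USER modifies the interface
        self.font_size = size

    def select(self, item):
        self._usage[item] += 1

    def ordered_items(self):
        # adaptivity: the SYSTEM reorders items by how often they are used
        return sorted(self.items, key=lambda i: -self._usage[i])

menu = Menu(["Open", "Save", "Print"])
menu.set_font_size(16)                               # adaptability
menu.select("Print"); menu.select("Print"); menu.select("Save")
print(menu.ordered_items())                          # adaptivity -> ['Print', 'Save', 'Open']
```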
Principles of robustness (1)
Observability
! ability of user to evaluate the internal state of
the system from its perceivable representation
! browsability; defaults; reachability;
persistence; operation visibility
Recoverability
! ability of user to take corrective action once
an error has been recognized
! reachability; forward/backward recovery;
commensurate effort
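As a purely illustrative sketch of backward recovery (not from the lecture), the hypothetical class below keeps an undo/redo history so the user can reverse an erroneous action with commensurate effort; forward recovery would instead mean negotiating onward to a new acceptable state rather than rolling back.

```python
# Hypothetical sketch: backward recovery via an undo/redo stack.
# A text buffer records each state so the user can reverse an action.

class TextBuffer:
    def __init__(self):
        self.text = ""
        self._undo = []   # states we can return to (backward recovery)
        self._redo = []   # states we can re-apply after an undo

    def insert(self, s):
        self._undo.append(self.text)   # remember state before the change
        self._redo.clear()
        self.text += s

    def undo(self):
        if self._undo:                 # corrective action once an error is noticed
            self._redo.append(self.text)
            self.text = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.text)
            self.text = self._redo.pop()

buf = TextBuffer()
buf.insert("Hello ")
buf.insert("wrold")    # the user notices the typo...
buf.undo()             # ...and recovers with low, commensurate effort
print(buf.text)        # -> "Hello "
```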
Principles of robustness (2)
Responsiveness
! how the user perceives the rate of
communication with the system
! Stability
Task conformance
! degree to which system services support all of
the user's tasks
! task completeness; task adequacy
Standards
! set by national or international bodies to ensure
compliance by a large community of designers
! standards require sound underlying theory and slowly
changing technology
Guidelines
Golden rules and heuristics
Shneiderman’s 8 Golden
Rules
1. Strive for consistency
2. Enable frequent users to use shortcuts
3. Offer informative feedback
4. Design dialogs to yield closure
5. Offer error prevention and simple error
handling
6. Permit easy reversal of actions
7. Support internal locus of control
8. Reduce short-term memory load
Norman’s 7 Principles
1. Use both knowledge in the world and
knowledge in the head.
2. Simplify the structure of tasks.
3. Make things visible: bridge the gulfs of
Execution and Evaluation.
4. Get the mappings right.
5. Exploit the power of constraints, both
natural and artificial.
6. Design for error.
7. When all else fails, standardize.
Nielsen's Heuristics
1. Visibility of system status
   (What is “reasonable time”?)
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
   (a bad example: “SEGMENTATION VIOLATION! Error #13”)
10. Help and documentation
   (We should wonder…..)
Phases of a heuristic
evaluation
1. Pre-evaluation training - give
evaluators needed domain knowledge
and information on the scenario
2. Evaluate interface independently
3. Rate each problem for severity
4. Aggregate results
5. Debrief: Report the results to the
interface designers
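As a hypothetical illustration of steps 3 and 4 (my own sketch, not part of the lecture), the code below aggregates independently assigned severity ratings, assuming a 0-4 scale, into a single prioritised problem list for the debrief.

```python
# Hypothetical sketch of steps 3-4: aggregate independent severity ratings
# (assumed 0-4 scale: 0 = not a problem ... 4 = usability catastrophe)
# into one prioritised problem list to report to the designers.
from statistics import mean

# each evaluator rates the problems they found: {problem: severity}
ratings = [
    {"cryptic error message": 4, "no undo on delete": 3},
    {"cryptic error message": 3, "inconsistent labels": 2},
    {"no undo on delete": 4, "inconsistent labels": 1},
]

aggregated = {}
for evaluator in ratings:
    for problem, severity in evaluator.items():
        aggregated.setdefault(problem, []).append(severity)

# order by mean severity, highest first, for the debrief
report = sorted(aggregated.items(), key=lambda kv: -mean(kv[1]))
for problem, scores in report:
    print(f"{problem}: mean severity {mean(scores):.1f} "
          f"(rated by {len(scores)} of {len(ratings)} evaluators)")
```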
Severity ratings
Styles of Heuristic
evaluation
! Problems found by a single inspector
! Problems found by multiple inspectors
! Goal or task?
Problems found by a single
inspector
! Average over six case studies
" 35% of all usability problems;
" 42% of the major problems
" 32% of the minor problems
! Not great, but
" finding some problems with one evaluator is
much better than finding no problems with
no evaluators!
Problems found by a single
inspector
! Varies according to
" difficulty of the interface being evaluated
" the expertise of the inspectors
! Tradeoff
" novices poorer, but cheaper!
Problems found by a single
inspector
! Evaluators miss both easy and hard problems
" ‘best’ evaluators can miss easy problems
" ‘worst’ evaluators can discover hard problems
Problems found by multiple
evaluators
! 3-5 evaluators find 66-75% of usability problems
" different people find different usability problems
" only modest overlap between the sets of
problems found
Problems found by multiple
evaluators
! Where is the best cost/benefit?
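One common way to reason about this cost/benefit question, though not spelled out on the slide, is the Nielsen-Landauer model: the share of problems found by i independent evaluators is roughly 1 - (1 - λ)^i, where λ is the probability that one evaluator detects a given problem (about 0.31 in Nielsen's data). A quick sketch of the resulting curve, with that λ taken as an assumption:

```python
# Sketch of the Nielsen-Landauer curve: expected share of usability
# problems found by i independent evaluators, each detecting a given
# problem with probability lam (~0.31 is the commonly quoted average).
lam = 0.31   # assumed per-evaluator detection rate

for i in range(1, 11):
    found = 1 - (1 - lam) ** i
    print(f"{i:2d} evaluators -> ~{found:.0%} of problems")

# With lam = 0.31 this gives roughly 31% for one evaluator and about
# two thirds of the problems at three evaluators, broadly in line with
# the figures quoted above, while extra evaluators add less and less.
```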
Individuals vs. teams
! Nielsen
" recommends individual evaluators inspect
the interface alone
! Why?
" evaluation is not influenced by others
" independent and unbiased
" greater variability in the kinds of errors found
" no overhead required to organize group
meetings
Self Guided vs. Scenario
Exploration
! Self-guided
" open-ended exploration
" Not necessarily task-directed
" good for exploring diverse aspects of the interface, and to
follow potential pitfalls
! Scenarios
" step through the interface using representative end user
tasks
" ensures problems identified in relevant portions of the
interface
" ensures that specific features of interest are evaluated
" but limits the scope of the evaluation - problems can be
missed
How useful are they?
! Inspection methods are discount methods for
practitioners. They are not rigorous scientific methods.
" All inspection methods are subjective.
" No inspection method can compensate for
inexperience or poor judgement.
" Using multiple analysts results in an inter-subjective
synthesis.
• However, this also
a) raises the false alarm rate, unless a voting system is
applied
b) reduces the hit rate if a voting system is applied!
" Group synthesis of a prioritized problem list seems to
be the most effective current practical approach.
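A rough, hypothetical sketch of that trade-off (my own illustration, not from the lecture): requiring that a problem be reported by at least k analysts before it enters the final list suppresses false alarms, but a higher k also discards genuine problems spotted by only one analyst.

```python
# Hypothetical sketch of a simple voting rule over analysts' reports:
# keep a problem only if at least `votes_needed` analysts reported it.
# A higher threshold cuts false alarms but also lowers the hit rate.
from collections import Counter

reports = [
    {"cryptic error message", "no undo on delete", "colour looks odd"},
    {"cryptic error message", "inconsistent labels"},
    {"cryptic error message", "no undo on delete"},
]

counts = Counter(p for analyst in reports for p in analyst)

def accepted(votes_needed):
    return {p for p, n in counts.items() if n >= votes_needed}

print(accepted(1))  # everything reported: most hits, most false alarms
print(accepted(2))  # simple vote: fewer false alarms, but fewer hits too
```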