HCI Module 6

Validation in HCI is the process of ensuring that a design meets user needs and intended goals through methods like usability testing and user feedback. The document outlines various validation techniques, including usability testing, interface testing, and heuristic evaluation, emphasizing the importance of iterative testing and improvement. It also details specific testing processes and guidelines to enhance user experience and interface functionality.

Validation

MODULE 6
Validation
• Validation in HCI refers to the process of assessing
whether a design meets its intended goals and
satisfies the needs of its users.
• It is a critical phase in the design process aimed at
ensuring that the developed system or interface is
effective, efficient, and user-friendly.
• Involves gathering feedback from users or
stakeholders through various methods such as
usability testing, user interviews, surveys, and
expert evaluations.
Validation Techniques
• Usability Testing
• User Interviews
• Surveys
• Expert Evaluations
• Heuristic Evaluation
• Cognitive Walkthrough
• A/B Testing
• Prototype Testing
• Field Studies
Usability Testing
• Usability testing is the practice of testing how easy a design is to use with a group of representative users.
• It usually involves observing users as they attempt to complete tasks, and can be done for different types of designs.
Usability Testing – Iterative Process
1) Determine whether testers can complete tasks successfully and independently.
2) Assess their performance and mental state as they try to complete tasks, to see how well your design works.
3) See how much users enjoy using it.
4) Identify problems and their severity.
5) Find solutions.
Usability Testing Methods
• In-person testing
• Remote testing
• Guerrilla testing
Usability Testing Process
1) Plan –
a. Define what you want to test.
• Ask yourself questions about your design/product.
• What aspect/s of it do you want to test?
• You can make a hypothesis from each answer.
• With a clear hypothesis, you’ll have the exact aspect you
want to test.
b. Decide how to conduct your test – e.g., remotely.
• Define the scope of what to test (e.g., navigation) and stick to
it throughout the test.
• When you test aspects individually, you’ll eventually build a
broader view of how well your design works overall.
Usability Testing Process
2) Set user tasks –
a. Prioritize the most important tasks to meet objectives (e.g., complete checkout).
• No more than 5 tasks per participant. Allow a 60-minute timeframe.
b. Clearly define tasks with realistic goals.
c. Create scenarios where users can try to use the design naturally. That means you let them get to grips with it on their own rather than direct them with instructions.
Usability Testing Process
3) Recruit testers – Know who your users are as a target group.
• Use screening questionnaires (e.g., Google Forms) to find suitable
candidates.
• You can advertise and offer incentives. You can also find contacts
through community groups, etc.
• If you test with only 5 users, you can still reveal about 85% of core issues (see the sketch after this list).
4) Facilitate/Moderate testing – Set up testing in a suitable
environment.
• Observe and interview users. Notice issues.
• See if users fail to see things, go in the wrong direction or misinterpret
rules. When you record usability sessions, you can more easily count the
number of times users become confused.
• Ask users to think aloud and tell you how they feel as they go
through the test.
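
The five-user figure in step 3 comes from Nielsen and Landauer's problem-discovery model: the proportion of problems found by n users is 1 − (1 − λ)^n, where λ ≈ 0.31 is the published average per-user discovery rate. A minimal sketch (λ varies by project, so treat the output as a rule of thumb):

```python
# Nielsen/Landauer problem-discovery model: share of usability problems
# found by n test users, given a per-user discovery rate lam (~0.31 avg).
def proportion_found(n: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {proportion_found(n):.0%} of problems")
# 5 users -> 84% with lam = 0.31, i.e. roughly the "85%" rule of thumb
```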
Usability Test Guidelines
1) Assess user behaviour – Use these metrics:
• Quantitative – time users take on a task, success and
failure rates, effort (how many clicks users take, instances
of confusion, etc.)
• Qualitative – users’ stress responses (facial reactions,
body-language changes, squinting, etc.), subjective
satisfaction (which they give through a post-test
questionnaire) and perceived level of effort/difficulty
2) Create a test report – Review video footage and the analyzed data. Clearly define design issues and best practices. Involve the entire team.
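
To make the quantitative metrics above concrete, here is a minimal sketch that computes success rate, mean time on task, and mean clicks from per-session records; the Session format is hypothetical, not any specific tool's log schema:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One participant's attempt at one task (hypothetical record)."""
    seconds: float   # time on task
    clicks: int      # effort proxy
    success: bool    # completed independently?

def summarize(sessions: list[Session]) -> dict[str, float]:
    n = len(sessions)
    return {
        "success_rate": sum(s.success for s in sessions) / n,
        "mean_seconds": sum(s.seconds for s in sessions) / n,
        "mean_clicks": sum(s.clicks for s in sessions) / n,
    }

print(summarize([Session(42.0, 9, True), Session(71.5, 14, False),
                 Session(38.2, 7, True)]))
```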
Interface Testing
• Interface testing in Human-Computer Interaction (HCI) focuses on evaluating the usability, functionality, and overall user experience of a computer interface.
• It is essential for ensuring that the interface meets the needs and expectations of its users.
Interface Testing Process
1. Usability Testing:
• Usability testing involves observing users as they interact with the
interface to identify usability issues and gather feedback on the
user experience.
• Test participants are given tasks to perform, and their interactions
are closely monitored.
• This helps in assessing how easily users can navigate the interface,
complete tasks, and achieve their goals.
2. Navigation Testing:
• Navigation testing focuses on evaluating the effectiveness of the
interface's navigation structure.
• Testers assess whether users can easily find their way around the
interface, access different features or sections, and understand the
organization of content.
Interface Testing Process
3. Functionality Testing:
• Functionality testing verifies that all features and
functionalities of the interface work as intended.
• Testers systematically test each feature, button, link, or
interactive element to ensure that they perform their intended
actions without errors or unexpected behaviour.
4. Compatibility Testing:
• Compatibility testing assesses the interface's performance
across different devices, browsers, and operating systems.
• Testers verify that the interface functions correctly and
maintains consistency in appearance and behaviour across
various platforms.
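
Compatibility checks like these are commonly automated. A sketch of a cross-browser smoke test, assuming Selenium WebDriver and pytest are installed and both browsers are available locally; the URL and title check are placeholders:

```python
# Cross-browser smoke test sketch (assumes selenium + pytest installed).
import pytest
from selenium import webdriver

URL = "https://shop.example.com"  # placeholder

@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    drv = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield drv
    drv.quit()

def test_homepage_loads(driver):
    driver.get(URL)
    # The same page should load and carry the same title in each browser
    assert "shop" in driver.title.lower()
```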
Interface Testing Process
5. Responsive Design Testing:
• For interfaces designed to be responsive, testers evaluate how
the interface adapts to different screen sizes and orientations.
• This involves testing the interface on various devices, such as
desktops, laptops, tablets, and smartphones, to ensure a
seamless user experience across different screen sizes.
6. Accessibility Testing:
• Accessibility testing focuses on ensuring that the interface is
usable by individuals with disabilities.
• Testers assess factors such as screen reader compatibility,
keyboard navigation, colour contrast, and the availability of
alternative text for images to ensure that the interface is
accessible to all users.
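
One accessibility check that is easy to automate is colour contrast. WCAG 2.x defines relative luminance as L = 0.2126R + 0.7152G + 0.0722B over gamma-expanded sRGB channels, and the contrast ratio as (L1 + 0.05)/(L2 + 0.05); normal text needs at least 4.5:1 for level AA. A minimal sketch:

```python
def _channel(c8: int) -> float:
    # sRGB gamma expansion per WCAG 2.x
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

ratio = contrast((119, 119, 119), (255, 255, 255))  # grey text on white
print(f"{ratio:.2f}:1 ->", "passes AA" if ratio >= 4.5 else "fails AA")
```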
Interface Testing Process
7. Performance Testing:
• Performance testing evaluates the interface's speed,
responsiveness, and stability under different usage conditions.
• Testers measure factors such as page load times, response times to
user interactions, and the interface's ability to handle concurrent
user sessions to identify and address performance bottlenecks.
8. Security Testing:
• Security testing assesses the interface's vulnerability to security
threats, such as unauthorized access, data breaches, or malicious
attacks.
• Testers evaluate the effectiveness of security measures such as
encryption, authentication, and authorization to ensure that user
data is protected.
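
A rough way to measure load times under concurrent sessions, as described in step 7, is to fire timed HTTP requests from a thread pool. A minimal sketch assuming the requests library is installed and using a placeholder URL; dedicated load-testing tools do the same thing at scale:

```python
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://shop.example.com"  # placeholder

def timed_get(_: int) -> tuple[float, int]:
    t0 = time.perf_counter()
    resp = requests.get(URL, timeout=10)
    return time.perf_counter() - t0, resp.status_code

with ThreadPoolExecutor(max_workers=20) as pool:      # 20 concurrent "users"
    results = list(pool.map(timed_get, range(100)))   # 100 requests total

times = sorted(t for t, _ in results)
print(f"median {times[len(times) // 2]:.2f}s, "
      f"p95 {times[int(len(times) * 0.95)]:.2f}s, "
      f"errors {sum(code >= 400 for _, code in results)}")
```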
Interface Testing Process
9. User Feedback Collection:
• In addition to conducting objective tests, testers may also
gather qualitative feedback from users through surveys,
interviews, or feedback forms.
• This feedback provides valuable insights into users' preferences,
opinions, and suggestions for improving the interface.
10. Iterative Testing and Improvement:
• Interface testing is an iterative process, with designers using the
findings from testing to make improvements to the interface.
• Designers continuously iterate on the interface design based on
user feedback and testing results to enhance usability,
functionality, and overall user experience.
Example
• Scenario: A company has developed a new e-
commerce website aimed at selling clothing and
accessories online. The interface testing is
conducted to ensure that the website meets
usability standards, functions properly across
different devices and browsers, and provides a
positive user experience.
Example
• Usability Testing:
• Objective: Evaluate how easily users can navigate the website, search for
products, add items to their cart, and complete the checkout process.
• Test Scenario: Ask participants to find a specific product, add it to their cart, and
proceed through the checkout process while thinking aloud.
• Evaluation: Observe users' interactions, note any difficulties or frustrations they
encounter, and collect feedback on the overall user experience.
• Navigation Testing:
• Objective: Assess the effectiveness of the website's navigation structure in
helping users find products and information.
• Test Scenario: Task participants with browsing different product categories, using
the search function to find specific items, and accessing information pages such
as the FAQ or contact page.
• Evaluation: Evaluate how easily participants can find their way around the
website, whether navigation labels are clear and intuitive, and if any navigation
issues arise.
Example
• Functionality Testing:
• Objective: Verify that all website features and functionalities work as
intended without errors.
• Test Scenario: Test each feature, such as product search, product filtering,
adding items to the cart, applying discount codes, and completing the
checkout process.
• Evaluation: Verify that all features function correctly, without unexpected
errors or issues, and that users can complete tasks seamlessly.
• Compatibility Testing:
• Objective: Ensure the website functions consistently across different devices
and browsers.
• Test Scenario: Test the website on various devices (desktop, laptop, tablet,
smartphone) and browsers (Chrome, Firefox, Safari, Edge) to verify
compatibility.
• Evaluation: Ensure that the website layout, functionality, and performance
remain consistent across different devices and browsers, without any
significant issues or discrepancies.
Example
• Responsive Design Testing:
• Objective: Assess how the website adapts to different screen sizes and
orientations.
• Test Scenario: Test the website on various devices with different screen
sizes and resolutions.
• Evaluation: Verify that the website layout adjusts appropriately to
different screen sizes, ensuring readability, usability, and functionality on
all devices.
• Accessibility Testing:
• Objective: Ensure the website is accessible to users with disabilities.
• Test Scenario: Use screen reader software to navigate the website, test
keyboard navigation, and assess colour contrast and text readability.
• Evaluation: Verify that the website meets accessibility standards,
including compatibility with screen readers, keyboard accessibility, and
adherence to colour contrast guidelines.
Example
• Performance Testing:
• Objective: Evaluate the website's speed, responsiveness, and stability.
• Test Scenario: Measure page load times, response times to user
interactions, and the website's ability to handle concurrent user
sessions.
• Evaluation: Identify any performance bottlenecks or issues affecting
the website's speed, responsiveness, or stability and take measures to
optimize performance.
• Security Testing:
• Objective: Assess the website's vulnerability to security threats.
• Test Scenario: Perform security scans, test for common vulnerabilities
(e.g., SQL injection, cross-site scripting), and verify the effectiveness of
encryption and authentication mechanisms.
• Evaluation: Identify and address security vulnerabilities to protect user
data and ensure the website's security integrity.
Example
• User Feedback Collection:
• Objective: Gather qualitative feedback from users on their
experience with the website.
• Test Scenario: Conduct surveys, interviews, or feedback forms to
collect users' opinions, preferences, and suggestions for
improvement.
• Evaluation: Analyze user feedback to identify areas for improvement
and address any usability issues or concerns raised by users.
• Iterative Testing and Improvement:
• Objective: Continuously iterate on the website design based on
testing results and user feedback.
• Evaluation: Use the findings from interface testing to make iterative
improvements to the website, addressing usability issues, optimizing
performance, enhancing accessibility, and strengthening security.
Heuristic evaluation
• Heuristic evaluation is a process where experts use rules of thumb to measure the usability of user interfaces in independent walkthroughs and report issues.
• Evaluators use established heuristics (e.g., Nielsen–Molich's) and reveal insights that can help design teams enhance product usability from early in development.
10 Usability Heuristics by Jakob
Nielsen
1: Visibility of System Status
2: Match Between the System and the Real World
3: User Control and Freedom
4: Consistency and Standards
5: Error Prevention
6: Recognition Rather than Recall
7: Flexibility and Efficiency of Use
8: Aesthetic and Minimalist Design
9: Help Users Recognize, Diagnose, and Recover from Errors
10: Help and Documentation
Visibility of System Status
• The design should always keep users informed
about what is going on, through appropriate
feedback within a reasonable amount of time.
• When users know the current system status,
they learn the outcome of their prior interactions
and determine next steps.
• Predictable interactions create trust in the
product as well as the brand.
Visibility of System Status
• Communicate clearly to users what the system's state is: no action with consequences to users should be taken without informing them.
• Present feedback to the user as quickly as possible (ideally, immediately).
• Build trust through open and continuous communication.
Match between system and the
real world
• The system should speak the users' language, with
words, phrases and concepts familiar to the user, rather
than system-oriented terms.
• Follow real-world conventions, making information
appear in a natural and logical order
• The way you should design depends very much on your
specific users.
• Terms, concepts, icons, and images that seem perfectly
clear to you and your colleagues may be unfamiliar or
confusing to your users.
Match between system and the
real world
• Ensure that users can understand meaning without having to
go look up a word’s definition.
• Never assume your understanding of words or concepts will
match that of your users.
• User research will uncover your users' familiar terminology, as
well as their mental models around important concepts.
User control and freedom
• Users often choose system functions by mistake and will
need a clearly marked "emergency exit" to leave the
unwanted state without having to go through an extended
dialogue.
• Support undo and redo.
User control and freedom
• Support Undo and Redo.
• Show a clear way to exit the current interaction, like a
Cancel button.
• Make sure the exit is clearly labeled and discoverable.
Consistency and standards
• Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
• Failing to maintain consistency may increase the users' cognitive load by forcing them to learn something new.
• Improve learnability by maintaining both types of consistency: internal and external.
Consistency and standards
• Internal consistency relates to consistency within a product
or a family of products, either within a single application or
across a family or suite of applications.
Consistency and standards
• External consistency refers to established conventions in
an industry or on the web at large, beyond one application
or family of applications.
Error prevention
• Minimize the likelihood of making mistakes.
• Good error messages are important, but the best designs carefully prevent problems from occurring in the first place.
• Either eliminate error-prone conditions, or check for them and present users with a confirmation option before they commit to the action.
Error prevention
• Prioritize your effort: prevent high-cost errors first, then little frustrations.
• Avoid slips by providing helpful constraints and good defaults.
• Prevent mistakes by removing memory burdens, supporting undo, and warning your users.
Recognition rather than recall
• Minimize the user's memory load by making elements, actions, and options visible.
• The user should not have to remember information from one part of the interface to another.
• Information required to use the design (e.g., field labels or menu items) should be visible or easily retrievable when needed.
Recognition rather than recall
• Let people recognize information in the interface, rather than forcing them to remember ("recall") it.
• Offer help in context, instead of giving users a long tutorial to memorize.
• Reduce the information that users have to remember.
Flexibility and efficiency of use
• Shortcuts — hidden from novice users — may speed up the
interaction for the expert user so that the design can cater to
both inexperienced and experienced users. Allow users to
tailor frequent actions.
• Flexible processes can be carried out in different ways, so that
people can pick whichever method works for them.
• Provide accelerators like keyboard shortcuts and touch
gestures.
• Provide personalization by tailoring content and functionality
for individual users.
• Allow for customization, so users can make selections about
how they want the product to work.
An aesthetic and minimalist
design
• Interfaces should not contain information that is irrelevant
or rarely needed.
• Every extra unit of information in an interface competes
with the relevant units of information and diminishes their
relative visibility.
• Keep the content and visual design of UI focused on the
essentials.
• Don't let unnecessary elements distract users from the
information they really need.
• Prioritize the content and features to support primary goals.
Help users recognize, diagnose,
and recover from errors
• Error messages should be expressed in plain language (no
error codes), precisely indicate the problem, and
constructively suggest a solution.
• These error messages should also be presented with visual
treatments that will help users notice and recognize them.
Help and documentation
• It's best if the system doesn't need any additional explanation. However, it may be necessary to provide documentation to help users understand how to complete their tasks.
• Help and documentation content should be easy to search and focused on the user's task. Keep it concise, and list concrete steps that need to be carried out.
How to Conduct a Heuristic Evaluation?
1. Know what to test and how.
• Whether it's the entire product or one procedure, clearly define the parameters of what to test and the objective.
2. Know your users and have clear definitions of the target audience's goals, contexts, etc.
• User personas can help evaluators see things from the users' perspectives.
3. Select 3–5 evaluators, ensuring their expertise in usability and the relevant industry.
4. Define the heuristics (around 5–10).
• This will depend on the nature of the system/product/design.
5. Brief evaluators on what to cover in a selection of tasks, suggesting a scale of severity codes (e.g., critical) to flag issues.
6. First walkthrough: have evaluators use the product freely so they can identify elements to analyze.
7. Second walkthrough: evaluators scrutinize individual elements according to the heuristics. They also examine how these fit into the overall design, clearly recording all issues encountered.
8. Debrief evaluators in a session so they can collate results for analysis and suggest fixes.
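
Steps 5–8 yield per-evaluator issue lists tagged with severity codes. A minimal sketch of collating them for the debrief, assuming a hypothetical record format and Nielsen's 0–4 severity scale:

```python
from collections import defaultdict

# (evaluator, heuristic violated, severity 0-4) -- hypothetical records
issues = [
    ("E1", "Visibility of system status", 3),
    ("E2", "Visibility of system status", 2),
    ("E1", "Error prevention", 4),
    ("E3", "Consistency and standards", 1),
]

by_heuristic = defaultdict(list)
for _evaluator, heuristic, severity in issues:
    by_heuristic[heuristic].append(severity)

# Rank by mean severity so the team fixes the worst problems first
for heuristic, sevs in sorted(by_heuristic.items(),
                              key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{heuristic}: mean severity {sum(sevs) / len(sevs):.1f} "
          f"({len(sevs)} reports)")
```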
User Acceptance Testing
• User acceptance testing (UAT) in Human-Computer Interaction (HCI) refers to the process of evaluating a system or interface to determine whether it meets the needs and expectations of its intended users.
• Unlike other forms of testing, which focus on technical aspects such as functionality and performance, UAT specifically assesses the system from the perspective of the end users.
User Acceptance Testing
• Define Acceptance Criteria:
• Before conducting UAT, it's essential to establish clear acceptance
criteria that define what constitutes a successful outcome. These
criteria should be based on the user requirements and goals of the
system or interface.
• Select Test Participants:
• Identify a representative sample of end users who will participate in
the UAT process. These users should match the target demographic
for the system or interface and have a vested interest in its success.
• Develop Test Scenarios:
• Create realistic scenarios or tasks that reflect common activities users
would perform when interacting with the system or interface. These
scenarios should cover a range of functionalities and workflows.
User Acceptance Testing
• Conduct the Test:
• Provide participants with access to the system or interface and ask them to
complete the predefined test scenarios. Encourage participants to provide
feedback on their experiences, including any issues encountered, areas of
confusion, or suggestions for improvement.
• Observe and Document:
• Observe participants as they complete the test scenarios and take note of any
usability issues or challenges they encounter. Document both quantitative
data, such as completion rates and error frequencies, and qualitative feedback
provided by participants.
• Collect User Feedback:
• After completing the test scenarios, conduct debriefing sessions with
participants to gather additional feedback. Ask open-ended questions about
their overall impressions of the system, specific likes or dislikes, and areas
where they feel improvements are needed.
User Acceptance Testing
• Iterate and Improve:
• Use the feedback collected during UAT to identify areas for improvement and
make iterative changes to the system or interface. Prioritize enhancements
based on the severity of usability issues and the impact on user experience.
• Repeat as Needed:
• Depending on the scope and complexity of the system, multiple rounds of
UAT may be necessary to address all usability concerns and ensure that the
system meets user expectations. Iterate on the testing process as needed to
incorporate feedback and refine the system further.
• Final Acceptance Decision:
• Once the necessary improvements have been made based on user feedback,
conduct a final round of testing to verify that the changes have been
effective. If the system meets the acceptance criteria and satisfies user
needs, it can be approved for release or deployment.
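
The final acceptance decision is easiest when the criteria from the first step are written as checkable targets. A minimal sketch (criteria names, targets, and measured values are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    target: float
    higher_is_better: bool = True

    def passes(self, measured: float) -> bool:
        return (measured >= self.target if self.higher_is_better
                else measured <= self.target)

criteria = [
    Criterion("task success rate", 0.90),
    Criterion("mean checkout time (s)", 120.0, higher_is_better=False),
]
measured = {"task success rate": 0.93, "mean checkout time (s)": 104.0}

results = {c.name: c.passes(measured[c.name]) for c in criteria}
print(results, "-> accept" if all(results.values()) else "-> iterate")
```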
Example
• Scenario: A social media company has redesigned its platform to improve user engagement and satisfaction. The UAT process aims to ensure that the redesigned platform meets the needs and expectations of its users before it is rolled out to the broader user base.
Example
• Define Acceptance Criteria:
• The acceptance criteria are established based on user feedback,
usability goals, and key performance indicators (KPIs) such as
increased user engagement and reduced bounce rates.
• Criteria include aspects like intuitive navigation, improved content
discovery, faster load times, and enhanced privacy controls.
• Select Test Participants:
• A diverse group of users is recruited to participate in the UAT
process, including regular users of the social media platform across
different demographics, age groups, and usage patterns.
• Participants should represent the target audience for the redesigned
platform and have varying levels of experience with social media.
Example
• Develop Test Scenarios:
• Test scenarios are created to cover a range of typical user
interactions and tasks on the social media platform.
• Scenarios may include tasks such as creating and sharing posts,
exploring recommended content, interacting with friends' posts,
adjusting privacy settings, and navigating the redesigned interface.
• Conduct the Test:
• Participants are provided with access to the redesigned platform and
asked to complete the predefined test scenarios.
• They are encouraged to explore the platform freely, interact with
various features, and provide feedback on their experiences as they
navigate through the tasks.
Example
• Observe and Document:
• Test facilitators observe participants as they complete the test
scenarios, noting any usability issues, errors, or areas of confusion.
• Quantitative data, such as task completion rates and time taken to
complete tasks, are recorded along with qualitative feedback
provided by participants.
• Collect User Feedback:
• After completing the test scenarios, participants are invited to share
their thoughts and opinions on the redesigned platform.
• They are asked about their overall impressions, likes and dislikes,
areas for improvement, and whether they feel the redesigned
platform meets their needs and expectations.
Example
• Iterate and Improve:
• Feedback collected during UAT is analyzed to identify common themes, usability issues,
and areas for improvement.
• Designers iterate on the platform based on user feedback, making adjustments to
address usability concerns, improve user experience, and align with acceptance
criteria.
• Repeat as Needed:
• Multiple rounds of UAT may be conducted as the platform evolves, with each round
incorporating feedback from previous tests to refine the design further.
• The UAT process is iterative, allowing for continuous improvement until the redesigned
platform meets the acceptance criteria and user expectations.
• Final Acceptance Decision:
• Once the redesigned platform meets the acceptance criteria and user feedback
indicates high levels of satisfaction, it is approved for release to the broader user base.
• Any outstanding issues or feedback from UAT are addressed before the official rollout to
ensure a successful launch.
Evaluation Techniques
• Evaluation
• tests usability and functionality of system
• occurs in laboratory, field and/or in collaboration with
users
• evaluates both design and implementation
• should be considered at all stages in the design life
cycle
Goals of Evaluation
• Assess extent of system functionality
• Assess effect of interface on user
• Identify specific problems

Evaluating Designs
Cognitive Walkthrough
Heuristic Evaluation
Review-based evaluation
Cognitive Walkthrough
Proposed by Polson et al.
• evaluates design on how well it supports user in learning task
• usually performed by expert in cognitive psychology
• expert 'walks through' design to identify potential problems using psychological principles
• forms used to guide analysis
Cognitive Walkthrough (ctd)
• For each task, walkthrough considers:
• what impact will interaction have on user?
• what cognitive processes are required?
• what learning problems may occur?
• Analysis focuses on goals and knowledge: does the design lead the user to generate the correct goals?
Heuristic Evaluation
• Proposed by Nielsen and Molich.
• usability criteria (heuristics) are identified
• design examined by experts to see if these are violated
• Example heuristics:
• system behaviour is predictable
• system behaviour is consistent
• feedback is provided
• Heuristic evaluation `debugs' design.
Review-based evaluation
• Results from the literature used to support or refute parts of design.
• Care needed to ensure results are transferable to new design.
• Model-based evaluation
• Cognitive models used to filter design options, e.g. GOMS prediction of user performance.
• Design rationale can also provide useful evaluation information.
Evaluating through user
Participation
Laboratory studies
• Advantages:
• specialist equipment available
• uninterrupted environment

• Disadvantages:
• lack of context
• difficult to observe several users cooperating

• Appropriate:
• if system location is dangerous or impractical
• for constrained single-user systems
• to allow controlled manipulation of use
Field Studies
• Advantages:
• natural environment
• context retained (though observation may alter it)
• longitudinal studies possible

• Disadvantages:
• distractions
• noise

• Appropriate:
• where context is crucial
• for longitudinal studies
Evaluating Implementations
Requires an artefact: simulation, prototype, full implementation
Experimental evaluation
• controlled evaluation of specific aspects of interactive behaviour
• evaluator chooses hypothesis to be tested
• a number of experimental conditions are considered which differ only in the value of some controlled variable
• changes in behavioural measure are attributed to different conditions
Experimental factors
• Subjects
• who – representative, sufficient sample
• Variables
• things to modify and measure
• Hypothesis
• what you’d like to show
• Experimental design
• how you are going to do it
Variables
• Independent Variable (IV)
• characteristic changed to produce different conditions
• e.g. interface style, number of menu items
• Dependent Variable (DV)
• characteristics measured in the experiment
• e.g. time taken, number of errors
Hypothesis
• Prediction of outcome
• framed in terms of IV and DV
• e.g. "error rate will increase as font size decreases"
• Null hypothesis:
• states no difference between conditions
• aim is to disprove this
• e.g. null hyp. = "no change with font size"
Experimental design
• Within groups design
• each subject performs experiment under each
condition.
• transfer of learning possible
• less costly and less likely to suffer from user variation.
• Between groups design
• each subject performs under only one condition
• no transfer of learning
• more users required
• variation can bias results.
Analysis of data
• Before you start to do any statistics:
• look at data
• save original data
• Choice of statistical technique depends on:
• type of data
• information required
• Type of data:
• discrete – finite number of values
• continuous – any value
Analysis - types of test
• Parametric
• assume normal distribution
• robust
• powerful
• Non-parametric
• do not assume normal distribution
• less powerful
• more reliable
• Contingency table
• classify data by discrete attributes
• count number of data items in each group
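
These three families map directly onto standard statistics routines. A minimal sketch assuming SciPy is installed, with made-up task times for two interface conditions:

```python
from scipy.stats import ttest_ind, mannwhitneyu, chi2_contingency

a = [41.2, 38.5, 45.0, 39.9, 43.1]   # task times (s), condition A
b = [52.3, 48.7, 55.1, 50.2, 47.9]   # task times (s), condition B

print(ttest_ind(a, b))      # parametric: assumes normal distributions
print(mannwhitneyu(a, b))   # non-parametric: no normality assumption

# Contingency table: success/failure counts per condition
table = [[18, 2],   # A: 18 successes, 2 failures
         [12, 8]]   # B: 12 successes, 8 failures
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```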
Analysis of data (cont.)
• What information is required?
• is there a difference?
• how big is the difference?
• how accurate is the estimate?
• Parametric and non-parametric tests mainly address the first of these.
Experimental studies on groups

More difficult than single-user experiments

Problems with:
• subject groups
• choice of task
• data gathering
• analysis
Subject groups
• Larger number of subjects → more expensive
• Longer time to `settle down' → even more variation!
• Difficult to timetable, so often only three or four groups
Data gathering
several video cameras
+ direct logging of application

problems:
• synchronisation
• sheer volume!

one solution:
• record from each perspective
Analysis
N.B. vast variation between groups

solutions:
• within groups experiments
• micro-analysis (e.g., gaps in speech)
• anecdotal and qualitative analysis

look at interactions between group and media

controlled experiments may `waste' resources!


Field studies
Experiments dominated by group formation

Field studies more realistic:


distributed cognition → work studied in context
real action is situated action
physical and social environment both crucial

Contrast:
psychology – controlled experiment
sociology and anthropology – open study and rich data
Observational Methods

Think Aloud
Cooperative evaluation
Protocol analysis
Automated analysis
Post-task walkthroughs
Think Aloud
• user observed performing task
• user asked to describe what he is doing and why, what he thinks
is happening etc.

• Advantages
• simplicity - requires little expertise
• can provide useful insight
• can show how system is actually used
• Disadvantages
• subjective
• selective
• act of describing may alter task performance
Cooperative evaluation
• variation on think aloud
• user collaborates in evaluation
• both user and evaluator can ask each other questions throughout

• Additional advantages
• less constrained and easier to use
• user is encouraged to criticize system
• clarification possible
Protocol analysis
• paper and pencil – cheap, limited to writing speed
• audio – good for think aloud, difficult to match with other protocols
• video – accurate and realistic, needs special equipment, obtrusive
• computer logging – automatic and unobtrusive, large amounts of data difficult to
analyze
• user notebooks – coarse and subjective, useful insights, good for longitudinal studies

• Mixed use in practice.


• audio/video transcription difficult and requires skill.
• Some automatic support tools available
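
Computer logging, in its simplest form, is timestamped event records written by the instrumented interface. A minimal sketch using Python's standard logging module (the event names are hypothetical):

```python
import logging
import time

logging.basicConfig(filename="session.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def log_event(kind: str, detail: str) -> None:
    """Automatic, unobtrusive record of one user action."""
    logging.info("%s %s", kind, detail)

# hypothetical events from an instrumented interface
log_event("CLICK", "button=checkout")
time.sleep(0.1)
log_event("ERROR", "form=payment field=card_number")
log_event("NAVIGATE", "page=/confirmation")
```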
Automated Analysis – EVA (Experimental Video Annotator)
• Workplace project
• Post task walkthrough
• user reacts on action after the event
• used to fill in intention
• Advantages
• analyst has time to focus on relevant incidents
• avoid excessive interruption of task
• Disadvantages
• lack of freshness
• may be post-hoc interpretation of events
Post-task Walkthroughs
• transcript played back to participant for comment
• immediately → fresh in mind
• delayed → evaluator has time to identify questions
• useful to identify reasons for actions and alternatives considered
• necessary in cases where think aloud is not possible
Query Techniques

Interviews
Questionnaires
Interviews
• analyst questions user on one-to-one basis, usually based on prepared questions
• informal, subjective and relatively cheap

• Advantages
• can be varied to suit context
• issues can be explored more fully
• can elicit user views and identify unanticipated problems
• Disadvantages
• very subjective
• time consuming
Questionnaires
• Set of fixed questions given to users

• Advantages
• quick and reaches large user group
• can be analyzed more rigorously
• Disadvantages
• less flexible
• less probing
Questionnaires (ctd)
• Need careful design
• what information is required?
• how are answers to be analyzed?

• Styles of question
• general
• open-ended
• scalar
• multi-choice
• ranked
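
Scalar questions are what make questionnaires easy to analyze rigorously. A minimal sketch computing per-question means from 1–5 Likert-style responses (questions and scores are made up); low means flag areas to probe in follow-up interviews:

```python
# Per-question summary of 1-5 Likert responses (made-up data)
responses = {
    "The navigation was easy to follow": [5, 4, 4, 3, 5],
    "I could find products quickly": [3, 2, 4, 3, 3],
    "Error messages helped me recover": [2, 2, 3, 1, 2],
}

for question, scores in responses.items():
    mean = sum(scores) / len(scores)
    print(f"{mean:.1f}  {question}")
```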
Physiological methods

Eye tracking
Physiological measurement
Eye tracking
• head or desk mounted equipment tracks the position of the eye
• eye movement reflects the amount of cognitive processing a display requires
• measurements include:
• fixations: eye maintains stable position. Number and duration indicate level of difficulty with display
• saccades: rapid eye movement from one point of interest to another
• scan paths: moving straight to a target with a short fixation at the target is optimal
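
Fixations are commonly separated from saccades with a dispersion-threshold (I-DT) algorithm: grow a window of gaze samples while their spatial spread stays below a threshold, and report the window's centroid as a fixation. A minimal sketch with made-up gaze coordinates and illustrative thresholds:

```python
def _dispersion(window):
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(points, max_dispersion=25.0, min_samples=5):
    """Dispersion-threshold (I-DT) fixation detection sketch."""
    fixations, i = [], 0
    while i + min_samples <= len(points):
        if _dispersion(points[i:i + min_samples]) <= max_dispersion:
            j = i + min_samples
            # grow the window while the samples stay tightly clustered
            while j < len(points) and _dispersion(points[i:j + 1]) <= max_dispersion:
                j += 1
            xs, ys = zip(*points[i:j])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), j - i))
            i = j
        else:
            i += 1   # saccade sample: slide the window forward
    return fixations  # (centroid x, centroid y, duration in samples)

print(idt_fixations([(100, 100), (102, 99), (101, 103), (100, 101),
                     (103, 100), (250, 240), (400, 90), (401, 92),
                     (399, 91), (402, 90), (400, 93)]))
```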
Physiological measurements
• emotional response linked to physical changes
• these may help determine a user’s reaction to an
interface
• measurements include:
• heart activity, including blood pressure, volume and pulse.
• activity of sweat glands: Galvanic Skin Response (GSR)
• electrical activity in muscle: electromyogram (EMG)
• electrical activity in brain: electroencephalogram (EEG)
• some difficulty in interpreting these physiological
responses - more research needed
Choosing an Evaluation Method
when in process: design vs. implementation
style of evaluation: laboratory vs. field
how objective: subjective vs. objective
type of measures: qualitative vs. quantitative
level of information: high level vs. low level
level of interference: obtrusive vs. unobtrusive
resources available: time, subjects, equipment,
expertise
