
MODULE-3

Briefly explain the variety of expert review methods used in evaluating interface designs. 10m

Heuristic Evaluation
 Experts review the interface against a set of design
principles or heuristics (e.g., Nielsen's heuristics or the
Eight Golden Rules).
 Effective when experts are familiar with applying
heuristics specific to the interface type, such as gaming or
mobile applications.
Guidelines Review
 Experts assess the design for adherence to a set of
organizational or industry guidelines.
 Often lengthy, as large guidelines documents may contain
hundreds of criteria.
Consistency Inspection
 Experts check for uniformity in terminology, layout, color
schemes, fonts, and formats across different parts of the
interface and documentation.
 Helps ensure that users experience a cohesive interface
across all elements.
Cognitive Walkthrough
 Experts simulate user actions to complete typical tasks
within the interface.
 Effective for interfaces that users learn through exploratory browsing, but also valuable for interfaces that require training.
Metaphors of Human Thinking (MOT)
 Experts assess the interface based on metaphors related
to human cognition, such as habits, awareness, and
thought processes.
 Useful for evaluating complex interfaces and identifying
deeper, often cognitive, usability issues.
Formal Usability Inspection
 Experts hold a structured review meeting, sometimes in a
courtroom-style setup, to discuss interface strengths and
weaknesses.
 Can be educational but requires more preparation and
resources than other methods.

Explain the methods used in evaluation during active use. 10m

The following methods are used to evaluate interfaces during active use:


1. Interviews and Focus Groups:
o Individual interviews provide in-depth insights into
user-specific concerns and preferences.
o Focus groups validate common themes in user
feedback, highlighting universal issues and
suggestions.
o Limited to a small user sample due to time and cost
constraints.
o Useful for gathering feedback that aligns with real
user needs, aiding in meaningful interface
improvements.
2. Continuous User-Performance Data Logging:
o Logs capture user actions, including command use,
error rates, and requests for help.
o Identifies common issues (e.g., frequent error
messages) that may need attention.
o Tracks the frequency of feature use, indicating areas
for redesign or simplification.
o Helps optimize performance, predict hardware needs, and manage system costs effectively (a minimal logging sketch follows this list).
3. Online Consultants and Suggestion Boxes:
o Access to support via phone, email, or chat reassures
users and helps resolve issues in real-time.
o Consultants provide valuable feedback on recurring
problems and common user queries.
o Suggestion boxes or bug-reporting tools encourage
feedback on interface issues.
o Improves user experience by making the service
more responsive to user needs.
4. Discussion Groups, Wikis, and Newsgroups:
o These platforms enable users to ask questions, share
experiences, and provide feedback anytime.
o Help identify frequent questions and usability issues
that may not appear in formal feedback.
o Encourage community support and user
engagement, contributing to a sense of community.
o Provide insights for designers to improve
documentation and support resources.
5. Automated Evaluation Tools:
o Software tools analyze layout, check design
consistency, and validate compliance with web
standards.
o Help identify design elements that may affect
usability, such as alignment or menu depth.
o Aid in testing accessibility and identifying usability
improvements quickly.
o Support interface designers by streamlining
evaluation tasks, ensuring a more consistent user
experience.
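As a concrete illustration of item 2 (continuous user-performance data logging), the sketch below shows one minimal way such logging could work in Python. It is only an assumption-based sketch: the event fields, the log_event helper, and the usage report are hypothetical and not drawn from any particular system.

# Minimal sketch of continuous user-performance data logging (hypothetical design).
import json
import time
from collections import Counter

LOG_FILE = "interaction_log.jsonl"  # assumed log destination

def log_event(user_id, action, error=False):
    """Append one interaction event (command use, error, help request) to the log."""
    event = {"time": time.time(), "user": user_id, "action": action, "error": error}
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(event) + "\n")

def usage_report():
    """Summarise feature use and per-feature error rates from the accumulated log."""
    actions, errors = Counter(), Counter()
    with open(LOG_FILE) as f:
        for line in f:
            event = json.loads(line)
            actions[event["action"]] += 1
            if event["error"]:
                errors[event["action"]] += 1
    for action, count in actions.most_common():
        print(f"{action}: used {count} times, error rate {errors[action] / count:.0%}")

# Example: log a few events, then report which features are used most and where errors cluster.
log_event("u1", "search")
log_event("u1", "checkout", error=True)
log_event("u2", "search")
usage_report()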

Explain the various types of usability testing with examples. 10m
Here’s a summary of the various types of usability testing with
examples:
1. Paper Mockups and Prototyping:
o Purpose: Used in early stages to assess user
reactions to interface layout, wording, and
sequencing.
o Example: A test administrator simulates the
interface using paper representations of screens,
flipping pages as the user navigates a task.
o Benefits: Quick, low-cost, and encourages open
feedback due to low fidelity.
2. Discount Usability Testing:
o Purpose: Provides rapid, low-cost feedback on
usability issues with a small sample (3-6
participants).
o Example: Small-scale testing of a new feature in an
app, focusing on identifying key usability issues with
minimal participants.
o Benefits: Effective for formative evaluations to guide
design revisions; suited for early development.
3. Competitive Usability Testing:
o Purpose: Compares usability of the interface with
competitor products or earlier versions.
o Example: Testing a new e-commerce website against
a competitor’s platform to measure task completion
time.
o Benefits: Highlights strengths and weaknesses
relative to other solutions, helping refine product
positioning.
4. Universal Usability Testing:
o Purpose: Ensures the interface works for diverse
user groups and environments.
o Example: Testing a web application on various
devices, browsers, and network speeds for
compatibility.
o Benefits: Increases accessibility for users with
different abilities, hardware, or internet speeds.
5. Field Tests and Portable Labs:
o Purpose: Observes interface usage in real-world or
naturalistic environments.
o Example: Setting up a portable usability lab in a
retail store to test a new POS (point of sale) system
with employees and customers.
o Benefits: Provides realistic feedback, capturing
challenges that may not arise in a controlled lab.
6. Remote Usability Testing:
o Purpose: Allows users to test the interface from their
own locations, making it feasible to involve
participants globally.
o Example: Using software like WebEx™ for
synchronous testing, where participants perform
tasks remotely while an evaluator observes.
o Benefits: Scalable, cost-effective, and includes users who otherwise could not participate in person.
7. Can-You-Break-This Testing:
o Purpose: Challenges users to find flaws or break the
system, often used for robustness testing.
o Example: Giving experienced users beta versions of
a video game to identify critical bugs before launch.
o Benefits: Reduces risk of critical failures post-launch,
fostering goodwill with users by preventing early
breakdowns.
Each method provides unique insights into user experience,
from initial impressions to long-term functionality and
robustness, creating a well-rounded approach to usability
testing.

Describe the steps involved in acceptance tests, with an example. 10m

Steps Involved in Acceptance Testing


Acceptance testing is a critical phase in the software
development lifecycle where the product is evaluated against
predefined acceptance criteria. The goal is to confirm that the
product meets the specified requirements before it is delivered
to the customer or released. Below are the steps involved in
conducting acceptance tests, with an example:
1. Define Acceptance Criteria
o Purpose: Establish clear, measurable, and objective
criteria that the product must meet.
o Example: For a food-shopping website, the
acceptance criteria may include:
 The ability to complete specific tasks (e.g.,
shopping cart checkout) within a set time frame
(e.g., 30 minutes).
 The system must be error-free or meet an
acceptable error threshold (e.g., less than 5%
user errors).
2. Identify the Participants
o Purpose: Select the target user group who will
perform the acceptance tests.
o Example: For the food-shopping site, participants
could include:
 35 adults (ages 25-45) with moderate web
usage experience.
 Special groups such as:
 10 older adults (ages 55-65).
 10 adults with disabilities (visual, auditory,
or motor impairments).
 10 recent immigrants with English as a
second language.
3. Design Test Scenarios and Tasks
o Purpose: Develop test cases and tasks that simulate
real-world usage based on the acceptance criteria.
o Example: A benchmark task could be: "Complete a
food order and checkout within 30 minutes." The task
must be realistic and aligned with the core
functionalities of the website.
4. Pilot Testing
o Purpose: Run initial tests with a small group of users
to refine the test scenarios, materials, and
procedures before the actual acceptance testing.
o Example: Conduct a trial run with a few participants
to ensure the tasks are understandable and feasible
within the time limits. Refine any ambiguous
instructions or technical issues.
5. Conduct Acceptance Testing
o Purpose: Execute the tests with the full set of
participants and measure the system's performance
based on the established criteria.
o Example:
 Participants are asked to complete specific tasks
(e.g., selecting and purchasing groceries).
 Results are recorded to ensure the tasks are
completed within the required time and with
minimal errors.
6. Evaluate Test Results
o Purpose: Assess the outcomes of the acceptance
tests to determine whether the system meets the
required criteria.
o Example: If the task completion time is within 30
minutes for at least 30 out of 35 users and the error
rate is acceptable (less than 5%), the product can
pass this acceptance criterion.
7. Post-Test Follow-up
o Purpose: After initial tests, a follow-up may be
conducted to evaluate user retention and ensure the
system is usable after a period of time.
o Example: Participants are contacted a week later to
perform the same tasks again, ensuring they retain
the knowledge and can complete the tasks without
issues. At least 8 out of 10 participants should
successfully complete the tasks within 20 minutes.
8. Acceptance Decision
o Purpose: Based on the results, the decision is made
whether the system meets the specified acceptance
criteria and can be released.
o Example: If the system passes the acceptance tests
(e.g., correct task completion within the required
time, minimal errors, and high user satisfaction), the
product can be deemed ready for final release.
Example of an Acceptance Test Scenario for a Food-
Shopping Website:
 Test Objective: Validate that the food shopping website
meets the functional, usability, and performance criteria.
 Acceptance Criteria:
o Task completion time: At least 30 out of 35
participants must complete checkout within 30
minutes.
o Error rate: Users should make no more than 5%
errors during task completion.
o User retention: After one week, at least 8 out of 10
participants should be able to perform tasks correctly
without additional training.
o Satisfaction: Participants should rate the website’s
usability 4 out of 5 or higher.
By following these steps, acceptance testing ensures that the
software is ready for deployment, meets the user needs, and
adheres to the agreed-upon specifications.
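To make the evaluation and acceptance-decision steps concrete, the sketch below checks recorded results against the food-shopping criteria described above. The function name, the default thresholds, and the sample data are illustrative assumptions, not measurements from a real test.

# Sketch: checking acceptance-test results against the criteria above (hypothetical data).
def passes_acceptance(completion_times, error_rates, ratings,
                      max_minutes=30, min_completers=30,
                      max_error_rate=0.05, min_rating=4.0):
    """Return True only if every acceptance criterion is satisfied."""
    completers = sum(1 for t in completion_times if t <= max_minutes)
    avg_error = sum(error_rates) / len(error_rates)
    avg_rating = sum(ratings) / len(ratings)
    return (completers >= min_completers
            and avg_error <= max_error_rate
            and avg_rating >= min_rating)

# Hypothetical results for 35 participants: times in minutes, per-user error rates, 1-5 ratings.
times = [22, 25, 28, 29, 18] * 7
errors = [0.02, 0.04, 0.03, 0.06, 0.01] * 7
ratings = [4, 5, 4, 3, 5] * 7
print("Release candidate accepted:", passes_acceptance(times, errors, ratings))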
Explain the survey instruments used in evaluating user interfaces. 10m

Survey instruments used in User Interface Design (UID) help gather insights into users' experiences,
preferences, and challenges with an interface. These tools
include questionnaires, scales, and survey forms designed to
measure various aspects of usability, and they provide data
that can validate design improvements or guide future
adjustments.
Types of Survey Instruments in UID:
1. Questionnaires: These are structured forms that ask
users about their experiences with specific aspects of an
interface. Examples include:
o Questionnaire for User Interaction Satisfaction
(QUIS): Measures satisfaction across categories like
system usability, usefulness, and quality of the
interface.
o System Usability Scale (SUS): A "quick and dirty" 10-item questionnaire that provides a usability score (see the scoring sketch after this list).
o Post-Study System Usability Questionnaire
(PSSUQ) and Computer System Usability
Questionnaire (CSUQ): IBM-developed tools
focusing on usefulness, quality, and satisfaction.
2. Likert Scales: Used to measure users' agreement or
disagreement on statements, e.g., "strongly agree" to
"strongly disagree." This scale is commonly applied in
questions about effectiveness, ease of use, or satisfaction.
3. Semantic Differential Scales: Bipolar scales that ask
users to rate their experience between two opposite
adjectives (e.g., "simple vs. complex").
4. Specific Metrics and Targeted Questions: Some
instruments measure specific user attributes or behaviors,
such as ease of learning, time to complete tasks, and
emotional responses.
5. Demographic and Behavioral Information: Collects
data on user backgrounds, computer experience, and
motivation. These insights allow designers to segment
users based on relevant characteristics.
6. Pilot Tests: To ensure accuracy and relevance, survey
instruments are tested with a smaller user group before
widespread implementation.
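As an illustration of item 1, the SUS score is conventionally computed by rescaling each of the ten 1-5 responses and multiplying the sum by 2.5 to give a 0-100 score: odd-numbered (positively worded) items contribute (response - 1) and even-numbered (negatively worded) items contribute (5 - response). The sketch below applies that standard formula; the sample responses are made up.

# Sketch: computing a System Usability Scale (SUS) score from ten 1-5 responses.
def sus_score(responses):
    """Standard SUS scoring: odd items add (r - 1), even items add (5 - r); total * 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # score ranges from 0 to 100

# Example with made-up responses; a score above roughly 68 is usually read as above-average usability.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # prints 85.0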
Benefits of UID Surveys
 Large Sample Sizes: Surveying a broad user base
provides statistically significant insights, minimizing the
biases inherent in small usability testing groups.
 Clear Goals: Targeted questions help designers identify
specific issues and measure improvements over time.
 Flexibility and Versatility: Many survey instruments,
like SUS and QUIS, can be tailored for different platforms
and used across various digital products, including web-
based, mobile, and gaming interfaces.
Design and Administration of Surveys
A survey should be pre-tested to avoid ambiguous questions
and ensure clarity. Consideration for the user group, such as
age or language needs, can also enhance the reliability of the
feedback.
