
Topics: Evaluation in Human-Computer Interaction (HCI)

Definition:

Evaluation in HCI is the process of measuring how well a computer system or application works for its users. It includes assessing usability, efficiency, and user satisfaction. Data is collected through methods such as testing and surveys and used to improve the design so that it meets user needs.

Note:

It is important to note that evaluation in HCI is an ongoing process, and it is essential to involve users throughout the design and development process. This approach ensures that the technology meets the needs of its intended users and can lead to more successful and usable products.

In short, evaluation is a critical step in the design and development of any technology intended for human use. Various methods and techniques can be used to assess a system's effectiveness, usability, and user satisfaction, including usability testing, user satisfaction surveys, and field studies.

Why Is Evaluation Important in HCI?

• Evaluation in HCI provides valuable insights into how users interact with a computer system or application and how well the system meets their needs.
• Without evaluation, it is difficult to know whether a system is truly usable, efficient, and satisfying to users.
• Evaluation helps identify problems early in the design process, when they are easier and less costly to fix.
• It helps ensure that the final product is of high quality and meets the needs of its users.
• HCI evaluations provide feedback that can improve the overall user experience, increase efficiency and productivity, and reduce frustration and errors.

Example of Applying Evaluation in HCI:

An example of evaluation in HCI: a group of users tests a new mobile application while researchers observe and take notes on any difficulties they encounter, how long it takes them to complete each task, and any feedback they provide. The researchers use this data to identify what needs to be improved in the application, such as navigation or feedback, and the design team makes changes and conducts further testing to ensure that the usability issues have been addressed.
What are the different types of evaluation?

There are several different types of evaluation that can be used in Human-Computer
Interaction (HCI) research and design, including:

1. Usability evaluation: This type of evaluation focuses on the usability of a system, including ease of use, learnability, and user satisfaction. Usability testing is a common method used in usability evaluation.
2. Functional evaluation: This type of evaluation assesses the functionality of a
system, including its ability to perform its intended tasks and meet its design
requirements.
3. User acceptance evaluation: This type of evaluation assesses how well a system
is accepted by its intended users, including factors such as perceived usefulness
and user satisfaction.
4. Accessibility evaluation: This type of evaluation assesses how well a system can
be used by people with disabilities, including factors such as compatibility with
assistive technologies and compliance with accessibility guidelines.
5. Performance evaluation: This type of evaluation assesses the system's efficiency
and effectiveness, including factors such as speed, accuracy, and scalability.
6. Impact evaluation: This type of evaluation assesses the broader impact of a
system, including factors such as its effect on users' productivity, quality of life,
and overall well-being.
7. A/B testing: A/B testing is a method of comparing two versions of a system in order to determine which one is more effective.
8. Heuristic evaluation: Heuristic evaluation is a method that uses a set of
established usability principles (heuristics) to evaluate the usability of a user
interface.
9. Cognitive walkthrough: This usability inspection method evaluates the learnability of an interface by tracing the actions of a hypothetical user and identifying potential problem areas.

Note:

It is important to note that different types of evaluation can be used at different stages of the design and development process, depending on the goals and objectives of the research and/or product development.
Example of the Different Types of Evaluation

1. Usability evaluation: A team of researchers conducts a usability testing session in a lab by having participants perform a set of tasks on a new e-commerce website. They measure the participants' performance, including task completion time and error rate, as well as their satisfaction with the website.

2. Functional evaluation: A team of engineers conducts functional testing on a new smart home automation system to ensure that it can connect to all the devices in a home and perform its intended tasks, such as controlling the temperature and lighting.

3. User acceptance evaluation: A team of researchers administers a survey to users of a new ride-sharing app to gather feedback on the app's perceived usefulness, ease of use, and overall satisfaction.

4. Accessibility evaluation: A team of designers and developers conducts an accessibility evaluation on a new mobile app to ensure that it is compatible with assistive technologies and compliant with accessibility guidelines for people with disabilities.

5. Performance evaluation: A team of engineers conducts performance testing on a new online platform to measure its response time, ability to handle large amounts of data, and overall scalability.

6. Impact evaluation: A team of researchers conducts a study to measure the impact of a new virtual reality system on users' mental health and overall well-being.

7. A/B testing: A team of marketers runs an A/B test on a website by showing half of the visitors a version with a red "buy" button and the other half a version with a blue "buy" button. They then measure the conversion rate for each version to determine which button color is more effective (a statistical sketch of this comparison follows this list).

8. Heuristic evaluation: A team of usability experts conducts a heuristic evaluation by inspecting a new website against a set of established usability principles (heuristics) such as consistency, visibility of system status, and error prevention.

9. Cognitive walkthrough: A team of usability experts conducts a cognitive walkthrough by simulating the thought process of a user trying to complete a task on a new mobile app and identifying potential problem areas, such as a lack of clear instructions or confusing navigation.
Application of Evaluation in a Real Scenario

Problem:

Social Media Application

A new social media platform for sharing and commenting on news articles is being
developed, but the development team is unsure if the platform is meeting the needs
of its intended users.

Solution: To evaluate the effectiveness and usability of the new social media
platform, the development team could use a combination of different types of
evaluation methods:

1. Usability evaluation: The team could conduct usability testing by having a group
of users perform a set of tasks on the platform, such as sharing an article, leaving
a comment, and searching for a specific topic. The team could measure the users'
performance, including task completion time and error rate, as well as their
satisfaction with the platform.
2. Functional evaluation: The team could conduct functional testing on the platform to ensure that it is able to perform its intended tasks, such as posting articles, commenting on them, and searching for them.
3. User acceptance evaluation: The team could administer a survey to users of the platform to gather feedback on the platform's perceived usefulness, ease of use, and overall satisfaction (a sketch of scoring such a survey follows this scenario).
4. Accessibility evaluation: The team could conduct an accessibility evaluation on
the platform to ensure that it is compatible with assistive technologies and
compliant with accessibility guidelines for people with disabilities.
5. Performance evaluation: The team could conduct performance testing on the
platform to measure its response time, ability to handle large amounts of data,
and overall scalability.
6. Impact evaluation: The team could conduct a study to measure the impact of the platform on users' knowledge and overall well-being.
7. A/B testing: The team could conduct an A/B test on the platform by showing
different versions of the interface to different users, and then measuring which
version is more effective in terms of user engagement and satisfaction.
8. Heuristic evaluation: The team could conduct a heuristic evaluation by inspecting the platform against a set of established usability principles (heuristics) such as consistency, visibility of system status, and error prevention.
9. Cognitive Walkthrough: The team could conduct a cognitive walkthrough by
simulating the thought process of a user trying to complete a task on the
platform and identifying potential problem areas, such as lack of clear
instructions or confusing navigation.

By using a combination of these different types of evaluation, the development team can get a comprehensive understanding of how well the platform is meeting the needs of its intended users and identify areas for improvement.
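As a companion to the user acceptance evaluation in step 3, here is a minimal sketch of scoring such a survey with the System Usability Scale (SUS), one widely used ten-item instrument. The scoring rule shown is the standard SUS procedure (odd items contribute score - 1, even items 5 - score, with the total scaled to 0-100); the participant responses are hypothetical.

def sus_score(responses):
    # Standard SUS scoring: ten items rated 1-5; odd (positively worded)
    # items contribute (score - 1), even (negatively worded) items
    # contribute (5 - score); the sum is scaled to a 0-100 range.
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((s - 1) if i % 2 == 1 else (5 - s)
                for i, s in enumerate(responses, start=1))
    return total * 2.5

participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],   # hypothetical participant 1
    [5, 1, 4, 2, 5, 1, 4, 1, 5, 2],   # hypothetical participant 2
]
scores = [sus_score(r) for r in participants]
print(f"mean SUS score: {sum(scores) / len(scores):.1f} / 100")

A mean score above roughly 68 is commonly read as above-average usability, though interpretation thresholds vary across studies.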

Problem:

Computer Rental Services Website

A new computer rental services website is being developed, but the development
team is unsure if the website is meeting the needs of its intended users.

Solution: To evaluate the effectiveness and usability of the new computer rental
services website, the development team could use a combination of different types
of evaluation methods:

1. Usability evaluation: The team could conduct usability testing by having a group
of users perform a set of tasks on the website, such as searching for a specific
computer, reserving a rental, and completing the checkout process. The team
could measure the users' performance, including task completion time and error
rate, as well as their satisfaction with the website.
2. Functional evaluation: The team could conduct functional testing on the website
to ensure that all the features are working as intended, such as the rental
reservation system, the checkout process, the payment gateway, and the
inventory management system.
3. User acceptance evaluation: The team could administer a survey to users of the
website to gather feedback on the website's perceived usefulness, ease of use,
and overall satisfaction.
4. Accessibility evaluation: The team could conduct an accessibility evaluation on
the website to ensure that it is compatible with assistive technologies and
compliant with accessibility guidelines for people with disabilities.
5. Performance evaluation: The team could conduct performance testing on the website to measure its response time, ability to handle large amounts of data, and overall scalability (a timing sketch follows this scenario).
6. Impact evaluation: The team could conduct a study to measure the impact of the
website on users' productivity and overall satisfaction.
7. A/B testing: The team could conduct an A/B test on the website by showing
different versions of the interface to different users, and then measuring which
version is more effective in terms of user engagement and satisfaction.
8. Heuristic evaluation: The team could conduct a heuristic evaluation by inspecting the website against a set of established usability principles (heuristics) such as consistency, visibility of system status, and error prevention.
9. Cognitive Walkthrough: The team could conduct a cognitive walkthrough by
simulating the thought process of a user trying to complete a task on the website
and identifying potential problem areas, such as lack of clear instructions or
confusing navigation.

By using a combination of these different types of evaluation, the development team can get a comprehensive understanding of how well the website is meeting the needs of its intended users and identify areas for improvement.
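To make the performance evaluation in step 5 concrete, here is a minimal sketch that times repeated page fetches and reports mean and percentile response times, using only the Python standard library. The URL is a placeholder; a real performance test would target a staging server and use a dedicated load-testing tool to simulate concurrent users.

import time
import urllib.request
from statistics import mean, quantiles

def measure_response_times(url, samples=20):
    # Fetch the page repeatedly and record the wall-clock time per
    # request, including the time to transfer the response body.
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        times.append(time.perf_counter() - start)
    return times

times = measure_response_times("https://example.com/")  # placeholder URL
percentiles = quantiles(times, n=100)                   # 99 cut points
p50, p95 = percentiles[49], percentiles[94]
print(f"mean {mean(times)*1000:.0f} ms  "
      f"p50 {p50*1000:.0f} ms  p95 {p95*1000:.0f} ms")

Reporting the 95th percentile alongside the mean matters because a small fraction of slow responses can frustrate users even when the average looks acceptable.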
Additional Information:

Heuristic evaluation is a method used to evaluate the usability of a computer system or application. It involves having a small group of experts (called evaluators) examine the system and identify any usability issues by comparing it to a set of established usability principles (called heuristics).

The Scenario:

Imagine you are designing a new recipe website, and you want to make sure that it's
easy for users to find and use the recipes they want. In order to do this, you might
conduct a heuristic evaluation. The evaluators would be experts in the field of
website design and usability, and they would examine the website to identify any
usability issues. They would use a set of established usability principles, such as
"Consistency and Standards" or "Error Prevention," to guide their evaluation. They
would then provide feedback on what they found, pointing out areas where the
website could be improved. For example, the evaluators might suggest that the
website's search function could be improved, or that the recipe categories could be
more clearly labeled.

Heuristic evaluation is a quick and cost-effective method for identifying usability issues, and it can be done early in the design process, making it easier and less costly to fix any problems that are found. It is not a substitute for user testing, but it can be a good way to get a sense of how usable a system is before testing it with actual users.

In summary, heuristic evaluation is a method that uses experts to evaluate the usability of a computer system or application by comparing it to a set of established usability principles. It is quick and cost-effective, can be done early in the design process, and helps identify usability issues.
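One practical detail worth showing: evaluators usually record each problem against the heuristic it violates, together with a severity rating (0-4 in Nielsen's commonly used scale). The sketch below aggregates such findings so the design team can prioritize fixes; the findings for the recipe website are hypothetical.

from collections import defaultdict

# Each finding: (heuristic violated, severity 0-4, short description)
findings = [
    ("Visibility of system status", 3, "no progress indicator during search"),
    ("Consistency and standards",   2, "two different icons for 'save recipe'"),
    ("Error prevention",            4, "delete has no confirmation step"),
    ("Consistency and standards",   3, "category labels differ across pages"),
]

by_heuristic = defaultdict(list)
for heuristic, severity, _ in findings:
    by_heuristic[heuristic].append(severity)

# Report the most severe problems first so fixes can be prioritized.
for heuristic, severities in sorted(by_heuristic.items(),
                                    key=lambda kv: -max(kv[1])):
    print(f"{heuristic}: {len(severities)} issue(s), "
          f"max severity {max(severities)}")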

What is the DECIDE evaluation framework?

• DECIDE is an evaluation framework used to guide the planning and conduct of evaluation studies in Human-Computer Interaction (HCI) and other fields.
• Its significance is that it provides a structured approach to evaluating a computer system or application, ensuring that all important aspects of the evaluation process are considered.

The acronym DECIDE stands for:

1. Determine the goals: Establish what the evaluation is intended to achieve and who it is for.
2. Explore the questions: Break the goals down into the specific questions the evaluation must answer.
3. Choose the evaluation approach and methods: Select methods that can answer those questions, such as usability testing, surveys, or field studies, and develop a data collection plan.
4. Identify the practical issues: Consider how to recruit participants and how to manage facilities, equipment, schedules, and budget.
5. Decide how to deal with the ethical issues: Plan for informed consent, privacy, and confidentiality.
6. Evaluate, analyze, interpret, and present the data: Collect the data, check its reliability and validity, draw conclusions about the system or application, and report the results to stakeholders.

Using the DECIDE framework ensures that the evaluation process is systematic and
comprehensive, and that all important aspects of the evaluation process are considered,
making the evaluation more reliable and trustworthy.
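As an illustration only, the following sketch records a DECIDE plan as a small data structure so that every step is written down before the study begins. The field names and contents are hypothetical examples, not a prescribed format.

from dataclasses import dataclass

@dataclass
class DecidePlan:
    goals: list[str]               # Determine the goals
    questions: list[str]           # Explore the questions
    methods: list[str]             # Choose the approach and methods
    practical_issues: list[str]    # Identify the practical issues
    ethical_issues: list[str]      # Decide how to deal with ethics
    analysis_plan: str             # Evaluate, analyze, interpret, present

plan = DecidePlan(
    goals=["Assess whether the checkout is usable"],
    questions=["Where do users abandon the checkout?"],
    methods=["Lab usability test with 8 representative users"],
    practical_issues=["Recruiting participants", "Session scheduling"],
    ethical_issues=["Informed consent", "Anonymized recordings"],
    analysis_plan="Task times, error rates, post-test interview themes",
)
print(plan)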

Why is it important?

The DECIDE framework is important because it provides a structured approach to evaluating a computer system or application, ensuring that all important aspects of the evaluation process are considered and making the evaluation more reliable and trustworthy.

It also helps ensure that evaluation results are communicated effectively to stakeholders and that improvements are implemented and re-evaluated, providing a feedback loop that keeps the system or application meeting the needs of its users. Overall, the DECIDE framework makes the evaluation process systematic, comprehensive, and effective, which makes it an essential tool for improving the usability, efficiency, and satisfaction of computer systems and applications.

Example of Applying the DECIDE Framework in a Real Scenario

New e-commerce Website.

1. Determine the goals: The evaluation aims to assess the usability of the website's interface, measure the efficiency of completing transactions on the website, and gather feedback on the overall user experience.
2. Explore the questions: The goals are refined into specific questions, such as: Can users find the products they want? How long does it take to complete a purchase? Where do users make errors or abandon a transaction?
3. Choose the evaluation approach and methods: A user testing methodology is chosen and a data collection plan is developed, including recruiting a diverse group of users who represent the target user group of the website and having them perform a series of tasks such as browsing, searching, and completing a purchase.
4. Identify the practical issues: The team plans how to recruit representative participants, schedule the testing sessions, and keep the study within its time and cost budget.
5. Decide how to deal with the ethical issues: Participants are informed of the purpose and procedures of the study, give their informed consent, and have their personal data kept confidential.
6. Evaluate, analyze, interpret, and present the data: The data is collected by observing the users while they perform the tasks and recording any difficulties they encounter, how long it takes them to complete each task, and any feedback they provide (a sketch of summarizing such observations follows this example). The data is analyzed, conclusions are drawn about the website's usability, efficiency, and user satisfaction, and the results are shared with stakeholders such as the development team. Based on the results, the design team simplifies the navigation, adds clear calls to action, and improves the checkout process, then conducts follow-up user testing to confirm that the changes have been effective.
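As referenced in step 6, here is a minimal sketch of summarizing the observations collected during user testing: per-task mean completion time, mean error count, and completion rate. The session records are hypothetical.

from statistics import mean

# Each record: (participant, task, seconds to complete, errors, completed?)
sessions = [
    ("P1", "search product", 42.0, 1, True),
    ("P2", "search product", 61.5, 3, True),
    ("P1", "checkout",       95.0, 2, True),
    ("P2", "checkout",      140.0, 5, False),
]

for task in sorted({record[1] for record in sessions}):
    rows = [r for r in sessions if r[1] == task]
    print(f"{task}: mean time {mean(r[2] for r in rows):.0f} s, "
          f"mean errors {mean(r[3] for r in rows):.1f}, "
          f"completion rate {sum(r[4] for r in rows) / len(rows):.0%}")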

A Tourist Website

1. Determine the goals: The evaluation aims to assess the usability of the website's interface, measure the efficiency of finding and booking travel itineraries, and gather feedback on the overall user experience.
2. Explore the questions: The goals are refined into specific questions, such as: Can users find a destination easily? How long does it take to book a trip? Where does the booking process cause confusion?
3. Choose the evaluation approach and methods: A user testing methodology is chosen and a data collection plan is developed, including recruiting a group of users who represent the target user group of the website and having them perform a series of tasks such as searching for a destination, browsing travel itineraries, and booking a trip.
4. Identify the practical issues: The team plans how to recruit representative participants, schedule the testing sessions, and keep the study within its time and cost budget.
5. Decide how to deal with the ethical issues: Participants are informed of the purpose and procedures of the study, give their informed consent, and have their personal data kept confidential.
6. Evaluate, analyze, interpret, and present the data: The data is collected by observing the users while they perform the tasks and recording any difficulties they encounter, how long it takes them to complete each task, and any feedback they provide. The data is analyzed, conclusions are drawn about the website's usability, efficiency, and user satisfaction, and the results are shared with stakeholders such as the development team. Based on the results, the design team simplifies the navigation, adds filters and sorting options, and improves the booking process, then conducts follow-up user testing to confirm that the changes have been effective.

By following the DECIDE framework, the evaluation process is systematic and comprehensive, and all important aspects of the evaluation process are considered, ensuring that the tourist website is usable, efficient, and satisfying for users searching for and booking travel itineraries.


Practical Issues:

1. Recruiting participants: Finding a representative sample of participants can be difficult, and recruiting participants who are willing to participate in evaluation studies can be challenging.
2. Time and cost: Evaluation studies can be time-consuming and expensive, which can
be a barrier to conducting thorough evaluations.
3. Data collection and analysis: Collecting and analyzing data from evaluation studies
can be a complex and challenging task.
4. Generalizing results: Results from evaluation studies may not be generalizable to the
entire population of users, which can limit the applicability of the findings.

Ethical issues:

1. Informed consent: Participants must be informed of the purpose and procedures of the evaluation study and give their informed consent to participate.
2. Privacy and confidentiality: Participants' personal information and data must be
protected and kept confidential.
3. Deception: Participants must not be deceived about the purpose or nature of the
evaluation study.
4. Risk of harm: Evaluation studies should not cause physical or psychological harm to
participants.
5. Fairness: The evaluation study should be designed to be fair and not discriminate
against any particular group of users.
It is important for practitioners to consider these issues when planning, conducting, and reporting evaluation studies. They should follow the ethical guidelines provided by professional organizations such as the ACM or IEEE.

Example:

A problem: A team of HCI researchers wants to conduct an evaluation study of a new
virtual reality (VR) system, but they are facing practical and ethical issues.

Solution:

Practical issues:

• Recruiting participants: The team reaches out to local universities and research groups to recruit a diverse group of participants who are willing to participate in the study. They also offer incentives such as payment or course credit for participation.
• Time and cost: To minimize time and cost, the team carefully plans and designs the study, selecting the most appropriate methods for data collection and analysis.
• Data collection and analysis: The team trains itself in the appropriate methods for data collection and analysis, such as user testing and observation, to ensure that the data is accurate and reliable.
• Generalizing results: The team acknowledges that the results may not be generalizable to the entire population of VR users, and they report the limitations of the study in their findings.

Ethical issues:

• Informed consent: The team obtains informed consent from all participants, informing them of the purpose, procedures, and potential risks of the study.
• Privacy and confidentiality: The team protects participants' personal information and data by keeping it confidential and not sharing it with any third parties.
• Deception: The team ensures that participants are not deceived about the purpose or nature of the study by providing them with accurate and honest information.
• Risk of harm: The team assesses the potential risks of the study and takes appropriate measures to minimize them, such as providing participants with a way to exit the VR environment if they feel uncomfortable.
• Fairness: The team designs the study to be fair and not discriminate against any particular group of users, such as those with disabilities or different cultural backgrounds.

This solution shows how the team of HCI researchers can address the practical and ethical issues involved in evaluating a new VR system: by carefully planning and designing the study, recruiting a diverse group of participants, obtaining informed consent and protecting participants' privacy, providing accurate and honest information, assessing and minimizing potential risks, and designing the study to be fair to all participants.

Tourist Website

A problem: A team of HCI researchers wants to conduct an evaluation study of a new tourist website, but they are facing practical and ethical issues.

Solution:

Practical issues:

• Recruiting participants: The team reaches out to local tourist agencies and travel groups to recruit a diverse group of participants who are willing to participate in the study. They also offer incentives such as discounts on future travel or gift cards for participation.
• Time and cost: To minimize time and cost, the team carefully plans and designs the study, selecting the most appropriate methods for data collection and analysis.
• Data collection and analysis: The team trains itself in the appropriate methods for data collection and analysis, such as user testing and observation, to ensure that the data is accurate and reliable.
• Generalizing results: The team acknowledges that the results may not be generalizable to the entire population of tourist website users, and they report the limitations of the study in their findings.

Ethical issues:

• Informed consent: The team obtains informed consent from all participants, informing them of the purpose, procedures, and potential risks of the study.
• Privacy and confidentiality: The team protects participants' personal information and data by keeping it confidential and not sharing it with any third parties.
• Deception: The team ensures that participants are not deceived about the purpose or nature of the study by providing them with accurate and honest information.
• Risk of harm: The team assesses the potential risks of the study and takes appropriate measures to minimize them, such as providing participants with a way to exit the study if they feel uncomfortable.
• Fairness: The team designs the study to be fair and not discriminate against any particular group of users, such as those with disabilities or different cultural backgrounds.

This solution shows how the team of HCI researchers can address the practical and ethical issues involved in evaluating a new tourist website: by carefully planning and designing the study, recruiting a diverse group of participants, obtaining informed consent and protecting participants' privacy, providing accurate and honest information, assessing and minimizing potential risks, and designing the study to be fair to all participants.

Summary of the Topics Discussed

1. Evaluation in HCI is the process of assessing the effectiveness, efficiency, and satisfaction of a user interface or system.
2. Different methods can be used for evaluation, such as usability testing, user satisfaction surveys, and expert evaluations.
3. The DECIDE framework is a structured approach for planning and conducting evaluations of HCI systems. It consists of six steps: Determine the goals, Explore the questions, Choose the evaluation approach and methods, Identify the practical issues, Decide how to deal with the ethical issues, and Evaluate, analyze, interpret, and present the data.
4. Practical and ethical issues are involved in evaluation studies such as
recruiting participants, time and cost, data collection and analysis,
generalizing results, informed consent, privacy and confidentiality, deception,
risk of harm, and fairness.
5. These issues must be considered when planning, conducting and reporting
the evaluation studies.
