Human-AI Interaction and AI Avatars
Keng Siau
City University of Hong Kong
philosophy, business, and linguistics (Buchanan, 2005; Kumar et al., 2016; Ma & Siau,
2018; Yang & Siau, 2018; Siau & Wang, 2020). AI applications that are familiar to
most range from Apple Siri to Amazon Go, and from self-driving cars to autonomous
weapons. AI can be classified into two main categories: weak AI and strong AI (Hyder
et al., 2019; Wang & Siau, 2019). Weak AI, also known as narrow AI, refers to AI applications
that specialize in specific tasks. Most current AI applications, such as Google Assistant,
AlphaGo, pilotless drones, and driverless vehicles, can be considered weak AI. However,
AI researchers from different organizations and nations are competing to create strong
AI (also called human-level artificial general intelligence or artificial superintelligence).
Strong AI applications can perform multiple tasks proficiently.
Strong AI is a controversial and contentious concept. The main concern with strong
AI is its potential to challenge, and ultimately replace, humans. Many transhumanists
believe that strong AI will possess self-awareness and intelligence equivalent to that of
humans. Once strong AI becomes a reality, an intelligence explosion may follow, with
exponential gains in intelligence. Technological singularity may be the next logical
outcome. In other words, strong AI could outperform humans at nearly every cognitive
task. Strong AI was originally thought to be impossible, or something that would happen
only in the distant future, but the emergence of ChatGPT has cast doubt on its
impossibility (Nah et al., 2023).
AI avatars have great potential to enhance HAII in several ways. In this study, we
summarize the potential of AI avatar technology along four dimensions: human-like
appearance, human-like interaction, customization, and context adaptability.
Human-like images have been widely used in UI design. Previous studies have found
that the anthropomorphism level of agents can positively influence users’ perception (de
Visser et al., 2016) and increase users’ intention to use (Ling et al., 2021). However,
early anthropomorphic agents were static human-like images with a degree of
cartoonishness. As computer graphics, rendering techniques, and 3D modeling
technologies advance, AI avatars can be rendered so realistically that users can barely
distinguish them from real humans with the naked eye. AI system designers can give AI
avatars different genders, ages, facial traits, and body shapes, mirroring natural human
appearance, to better interact with users. In addition, emerging technologies focus on
synchronizing avatar mouth shapes with speech content, producing realistic eye
movement, and refining skin texture, which can further improve the realism of AI avatars.
ChatGPT, the latest AI chatbot technology and a large-scale natural language processing
application, has shown AI systems’ potential for human-like interaction. ChatGPT has
reportedly passed Turing-style tests by convincing panels of judges that it was human. It
can interact with users in natural language and friendly conversation. Nevertheless,
interacting through text or audio alone has its limits. Visual representations such as AI
avatars can add further human-like interaction features: they enable the exhibition of
facial expressions, gestures, and body language. By combining visual and auditory
presentation, AI avatars have great potential to exhibit the “emotions” of AI systems,
creating a more human-like interaction. As technology advances, different tones and
inflections, facial expressions, gestures, and body language can be combined to create
realistic human-like interaction.
2.3 Customization
User preference is one of the most critical concerns for AI designers. Human-centered
AI systems aim to provide satisfying services based on personalized user preferences
(Alves et al., 2020; Garibay et al., 2023). For instance, voice navigation allows users
to select different sounds. ChatGPT can autonomously learn the users’ preferences for
response styles from previous interactions. Using an AI avatar as the UI to enhance
voice- and text-based interaction gives users more opportunities for a personalized
experience. AI designers can adopt a customization strategy that transfers the power of
avatar design to users, who can then select or design the AI avatars that communicate
with them or assist with their work.
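The customization strategy described above amounts to exposing an avatar's design parameters to the end user rather than fixing them at design time. A minimal sketch in Python illustrates the idea; all field names and defaults here are hypothetical and not drawn from any specific avatar platform:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AvatarProfile:
    # Presentation options a user could select; the fields are
    # illustrative placeholders, not a real platform's schema.
    gender: str = "neutral"
    age_range: str = "adult"
    voice: str = "default"
    speaking_style: str = "friendly"

def customize(profile: AvatarProfile, **prefs) -> AvatarProfile:
    # Apply the user's preferences, returning a new profile so the
    # designer-provided default remains unchanged.
    return replace(profile, **prefs)

default_avatar = AvatarProfile()
my_avatar = customize(default_avatar, voice="warm", speaking_style="concise")
```

Keeping the designer's default immutable and deriving per-user profiles from it is one simple way to give users control without losing a sensible fallback.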
3.1 Trust
Trust is a critical factor influencing user adoption and acceptance of AI systems (Siau &
Shen, 2003; Siau & Wang, 2018; Choung et al., 2022). Users are more likely to use AI
systems and interact with them when they perceive AI systems as trustworthy, reliable,
and competent. Using human-like AI avatars as the UI can positively influence users’
trust in AI systems (Jiang et al., 2023).
First, realistic human-like AI avatars can enhance the emotional connection between
humans and AI systems (Culley & Madhavan, 2013), which may contribute to trust build-
ing. Generative AI, like ChatGPT, has shown its ability to communicate with humans
naturally, mimicking human-like conversations. By additionally incorporating human-like
appearance, expressions, and body language, AI systems are likely to gain users’ trust
and, in turn, adoption. For instance, in healthcare, AI assistants can serve as virtual
physicians, engaging in empathetic conversations and guiding patients through
mindfulness exercises (Graham et al., 2019). Moreover, when interacting with AI
assistants that have familiar and friendly appearances and behaviors, patients are more
likely to self-disclose information, which helps in diagnosing and treating mental illness.
Second, AI avatars open up more possibilities for increasing AI credibility through a
variety of UI designs. Designers can customize AI avatars by changing their genders,
races, ages, and other visual attributes. They can also leverage their creativity to develop
avatars with distinct facial features, distinctive dress styles, and varied speaking styles.
This flexibility allows designers to explore different approaches to improving the
trustworthiness of AI systems. By designing an AI avatar’s visual presentation and
interaction style to align with user preferences and expectations, designers can create a
UI that enhances the perceived credibility and trustworthiness of the AI system.
User satisfaction is a key factor influencing users’ intention to adopt and use AI
systems (Deng et al., 2010; Dang et al., 2018). In particular, the hedonic value of AI
applications and systems plays a significant role in user satisfaction (Ben Mimoun &
Poncin, 2015). Both the AI avatar itself and the avatar customization function have the
potential to increase the hedonic value of AI systems. With the support of AI avatars, AI
systems can be more than just functional tools.
First, it is novel and interesting for users to interact with a highly human-like AI. AI
avatars enhance the user experience by bringing a sense of surprise, engagement, and
humor. Features such as narrative and interactive conversation further increase avatars’
entertainment value, improving the user experience and leaving a lasting impression.
In addition, AI systems that allow users to customize the avatars they interact with will
be more appealing to users and will increase hedonic value. Users might view this
customization function as a gamification component that enhances user control and
playfulness. The ability to customize AI avatars’ appearance, voice, and manner of
acting and communicating can provide users with a more satisfying and enjoyable
interactive experience.
promote inclusiveness, mitigate biases, and foster a more equitable and respectful social
environment.
5 Conclusions
AI has become increasingly prevalent in our daily lives, acting as a powerful supplement
to human capabilities. In this paper, we focus on the application potential of AI avatars
in HAII. We systematically summarize the potential enabled by AI avatar technology
and discuss how to use AI avatars to enhance HAII. AI avatars hold bright prospects for
promoting the development of AI systems by increasing user adoption and acceptance.
We believe that AI avatars can benefit individuals, organizations, and society as a whole
in the future. We also point out the challenges that AI avatars face and directions for
future research. Not only academic researchers but all sectors of society should work
together to promote the development of human-centered AI systems and AI avatars and
to reduce the negative impacts of AI avatars.
This research has both theoretical and practical significance. It contributes to
developing theories specific to Human-AI Interaction and to testing existing theories in
the context of Human-AI Interaction. It also offers AI developers and practitioners
suggestions and guidelines for the design of Human-AI Interaction and AI avatars.
References
Alves, T., Natálio, J., Henriques-Calado, J., Gama, S.: Incorporating personality in user interface
design: a review. Pers. Individ. Differ. 155, 109709 (2020)
128 Y. Liu and K. L. Siau
Ben Mimoun, M.S., Poncin, I.: A valued agent: how ECAs affect website customers’ satisfaction
and behaviors. J. Retail. Consum. Serv. 26, 70–82 (2015)
Bevan, N., Carter, J., Harker, S.: ISO 9241-11 revised: what have we learnt about usability since
1998? In: Kurosu, M. (ed.) HCI 2015. LNCS, vol. 9169, pp. 143–151. Springer, Cham (2015).
https://doi.org/10.1007/978-3-319-20901-2_13
Buchanan, B.G.: A (very) brief history of artificial intelligence. AI Mag. 26(4), 53–60 (2005)
Choung, H., David, P., Ross, A.: Trust in AI and its role in the acceptance of AI technologies. Int.
J. Hum.-Comput. Interact. 39(9), 1727–1739 (2022)
Culley, K.E., Madhavan, P.: A note of caution regarding anthropomorphism in HCI agents. Comput.
Hum. Behav. 29, 577–579 (2013)
Daft, R.L., Lengel, R.H.: A proposed integration among organizational information requirements,
media richness and structural design. Manage. Sci. 32, 554–571 (1986)
Dang, M.Y., Zhang, G.Y., Chen, H.: Adoption of social media search systems: an IS success model
perspective. Pac. Asia J. Assoc. Inform. Syst. 10(2), 55–78 (2018)
Davis, A., Murphy, J., Owens, D., Khazanchi, D., Zigurs, I.: Avatars, people, and virtual worlds:
foundations for research in metaverses. J. Assoc. Inf. Syst. 10, 90–117 (2009)
de Visser, E.J., et al.: Almost human: anthropomorphism increases trust resilience in cognitive
agents. J. Exp. Psychol. Appl. 22, 331–349 (2016)
de Visser, E.J., Pak, R., Shaw, T.H.: From “automation” to “autonomy”: the importance of trust
repair in human–machine interaction. Ergonomics 61, 1409–1427 (2018)
Deng, L., Turner, D.E., Gehling, R., Prince, B.: User experience, satisfaction, and continual usage
intention of IT. Eur. J. Inf. Syst. 19, 60–75 (2010)
Duan, Y., Hsieh, T.-S., Wang, R.R., Wang, Z.: Entrepreneurs’ facial trustworthiness, gender, and
crowdfunding success. J. Corp. Finan. 64, 101693 (2020)
Eagly, A.H., Mladinic, A.: Are people prejudiced against women? Some answers from research on
attitudes, gender stereotypes, and judgments of competence. Eur. Rev. Soc. Psychol. 5, 1–35
(1994)
Eschenbrenner, B., Nah, F., Siau, K.: 3-D virtual worlds in education: applications, benefits, issues,
and opportunities. J. Database Manag. 19(4), 91–110 (2008)
Galanxhi, H., Nah, F.F.-H.: Deception in cyberspace: a comparison of text-only vs. avatar-
supported medium. Int. J. Hum.-Comput. Stud. 65(9), 770–783 (2007)
Garibay, O., et al.: Six human-centered artificial intelligence grand challenges. Int. J. Hum.-
Comput. Interact. 39, 391–437 (2023)
Graham, S., Depp, C., Lee, E.E., Nebeker, C., Tu, X., Kim, H.-C., Jeste, D.V.: Artificial
intelligence for mental health and mental illnesses: an overview. Curr. Psychiatry Rep. 21(11), 116 (2019)
Hyder, Z., Siau, K., Nah, F.: Artificial intelligence, machine learning, and autonomous technologies
in mining industry. J. Database Manag. 30, 67–79 (2019)
Jiang, Y., Yang, X., Zheng, T.: Make chatbots more adaptive: dual pathways linking human-like
cues and tailored response to trust in interactions with chatbots. Comput. Hum. Behav. 138,
107485 (2023)
Kang, S.-H., Gratch, J.: The effect of avatar realism of virtual humans on self-disclosure in anony-
mous social interactions. In: CHI Extended Abstracts on Human Factors in Computing Systems,
pp. 3781–3786 (2010)
Katerattanakul, P., Siau, K.: Creating a virtual store image. Commun. ACM 46(12), 226–232
(2003)
Krishnan, V., Niculescu, M.D., Fredericks, E.: Should I choose this salesperson? Buyer’s emergent
preference in seller from mere exposure. J. Mark. Theory Pract. 27(2), 196–209 (2019)
Kumar, N., Kharkwal, N., Kohli, R., Choudhary, S.: Ethical aspects and future of artificial
intelligence. In: International Conference on Innovation and Challenges in Cyber Security
(ICICCS-INBUSH) (2016)
Ling, E.C., Tussyadiah, I., Tuomi, A., Stienmetz, J., Ioannou, A.: Factors influencing users’ adop-
tion and use of conversational agents: a systematic review. Psychol. Mark. 38(7), 1031–1051
(2021)
Little, A.C.: Facial appearance and leader choice in different contexts: evidence for task contingent
selection based on implicit and learned face-behaviour/face-ability associations. Leadersh. Q.
25, 865–874 (2014)
Little, A.C., Burriss, R.P., Jones, B.C., Roberts, S.C.: Facial appearance affects voting decisions.
Evol. Hum. Behav. 28, 18–27 (2007)
Ma, Y., Siau, K.: Artificial intelligence impacts on higher education. In: MWAIS 2018 Proceedings,
vol. 42 (2018)
Mori, M., MacDorman, K.F., Kageki, N.: The uncanny valley [from the field]. IEEE Robot. Autom.
Mag. 19(2), 98–100 (2012)
Nah, F., Eschenbrenner, B., DeWester, D., Park, S.: Impact of flow and brand equity in 3D virtual
worlds. J. Database Manag. 21(3), 69–89 (2010)
Nah, F., Eschenbrenner, B., DeWester, D.: Enhancing brand equity through flow and telepresence:
a comparison of 2D and 3D virtual worlds. MIS Q. 35(3), 731–747 (2011)
Nah, F.F.-H., Schiller, S.Z., Mennecke, B.E., Siau, K., Eschenbrenner, B., Sattayanuwat, P.: Collab-
oration in virtual worlds: impact of task complexity on team trust and satisfaction. J. Database
Manag. 28(4), 60–78 (2017)
Nah, F., Zheng, R., Cai, J., Siau, K., Chen, L.: Generative AI and ChatGPT: applications, chal-
lenges, and AI-human collaboration. J. Inform. Technol. Case Appl. Res. 25(3), 277–304
(2023)
Park, S., Nah, F., DeWester, D., Eschenbrenner, B., Jeon, S.: Virtual world affordances: enhancing
brand value. J. Virtual Worlds Res. 1(2), 1–18 (2008)
Schiller, S.Z., Mennecke, B.E., Nah, F.F.-H., Luse, A.: Institutional boundaries and trust of virtual
teams in collaborative design: an experimental study in a virtual world environment. Comput.
Hum. Behav. 35, 565–577 (2014)
Schiller, S., Nah, F., Luse, A., Siau, K.: Men are from Mars and Women are from Venus: Dyadic
collaboration in the metaverse. Internet Res. (to appear)
Shneiderman, B.: Human-centered artificial intelligence: three fresh ideas. AIS Trans. Hum.-
Comput. Interact. 12, 109–124 (2020)
Siau, K.: Education in the age of artificial intelligence: how will technology shape learning? The
Global Analyst 7(3), 22–24 (2018)
Siau, K., et al.: FinTech empowerment: Data science, artificial intelligence, and machine learning.
Cutter Bus. Technol. J. 31(11/12), 12–18 (2018)
Siau, K., Shen, Z.: Building customer trust in mobile commerce. Commun. ACM 46(4), 91–94
(2003)
Siau, K., Wang, W.: Building trust in artificial intelligence, machine learning, and robotics. Cutter
Bus. Technol. J. 31(2), 47–53 (2018)
Siau, K., Wang, W.: Artificial intelligence (AI) ethics: ethics of AI and ethical AI. J. Database
Manag. 31(2), 74–87 (2020)
Stephanidis, C., et al.: Seven HCI grand challenges. Int. J. Hum.-Comput. Interact. 35(14), 1229–
1269 (2019)
Stone, D., Jarrett, C., Woodroffe, M., Minocha, S.: User Interface Design and Evaluation. Elsevier,
San Francisco, California (2005)
Tajfel, H.: Social stereotypes and social groups. In: Hogg, M.A., Abrams, D. (eds.) Intergroup
Relations: Essential Readings, pp. 132–145. Psychology Press (2010)
Wang, D., et al.: From human-human collaboration to Human-AI collaboration: Designing AI
systems that can work together with people. In: Extended Abstracts of the 2020 CHI Conference
on Human Factors in Computing Systems, pp. 1–6 (2020)
Wang, W., Siau, K.: Artificial intelligence, machine learning, automation, robotics, future of work
and future of humanity. J. Database Manag. 30, 61–79 (2019)
Wang, Y., Siau, K.L., Wang, L.: Metaverse and human-computer interaction: A technology frame-
work for 3D virtual worlds. In: Chen, J.Y.C., Fragomeni, G., Degen, H., Ntoa, S. (eds.) HCI
International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial
Intelligence: 24th International Conference on Human-Computer Interaction, HCII 2022, Vir-
tual Event, 26 June – 1 July 2022, Proceedings, pp. 213–221. Springer Nature Switzerland,
Cham (2022). https://doi.org/10.1007/978-3-031-21707-4_16
Xu, W., Dainoff, M.J., Ge, L., Gao, Z.: Transitioning to human interaction with AI systems:
New challenges and opportunities for HCI professionals to enable human-centered AI. Int. J.
Hum.-Comput. Interact. 39(3), 494–518 (2023)
Yang, Y., Siau, K.: A qualitative research on marketing and sales in the artificial intelligence age.
In: MWAIS 2018 Proceedings, vol. 41 (2018)
Yang, Y., Siau, K., Xie, W., Sun, Y.: Smart health: Intelligent healthcare systems in the metaverse,
artificial intelligence, and data science era. J. Organ. End User Comput. 34(1), 1–14 (2022)
Yousefpour, A., et al.: All one needs to know about fog computing and related edge computing
paradigms: a complete survey. J. Syst. Architect. 98, 289–330 (2019)