

Human-AI Interaction and AI Avatars

Yuxin Liu and Keng L. Siau

City University of Hong Kong, Hong Kong, Hong Kong SAR


[email protected], [email protected]

Abstract. Human-Computer Interaction has been evolving rapidly with the advancement of artificial intelligence and the metaverse. Human-AI Interaction is a new area in Human-Computer Interaction. In this paper, we look at AI avatars, which are human-like representations of AI systems that closely resemble real persons. AI avatars have several advantages, including enhanced trustworthiness and increased AI system adoption, and they enable human-like interaction and engagement. AI avatars also have their share of issues, such as the potential for psychological impact, discrimination, and biases. The paper discusses the benefits and pitfalls of AI avatars and proposes several research directions. This research has theoretical and practical significance. AI avatars are a new phenomenon. Existing theories, such as those in psychology, social psychology, and communication, may not hold when applied to AI avatars. AI avatars also present new features and characteristics, such as ease of customization and personalization, that are not easily found in humans. Such features and characteristics can be capitalized on to enhance Human-AI Interaction, but their negative aspects will need to be understood and managed. For practitioners and AI developers, this pioneering research provides new insights, understanding, and possibilities of AI avatars that can be used to further enhance Human-AI Interaction.

Keywords: Human-AI Interaction · Human-Computer Interaction · Artificial Intelligence · AI Avatars · Metaverse

1 Introduction: AI and Human-AI Interaction


Artificial Intelligence (AI) technology has penetrated almost all aspects of people's lives, from chatbots, virtual assistants, recommendation systems, pilotless drones, and autonomous vehicles that improve the quality of life to the AI translation and text conversion tools that assist with daily work and routines. The launch of ChatGPT in 2022 vividly demonstrated to the public the incredible power and potential of AI. The wave of AI has undoubtedly swept the world, and the future of AI has become a hot discussion topic. AI has also influenced the Human-Computer Interaction (HCI) arena, and Human-AI Interaction (HAII) is one of its latest research areas.

1.1 Artificial Intelligence


AI is a field that encompasses and influences many disciplines, such as computer science, engineering, biology, psychology, mathematics, statistics, logic, education, marketing, philosophy, business, and linguistics (Buchanan, 2005; Kumar et al., 2016; Ma & Siau, 2018; Yang & Siau, 2018; Siau & Wang, 2020). AI applications that are familiar to most range from Apple Siri to Amazon Go, and from self-driving cars to autonomous weapons. AI can be classified into two main categories: weak AI and strong AI (Hyder et al., 2019; Wang & Siau, 2019). Weak AI, also known as narrow AI, refers to AI applications that specialize in specific tasks. Most current AI applications, such as Google Assistant, AlphaGo, pilotless drones, and driverless vehicles, can be considered weak AI. However, AI researchers from different organizations and nations are competing to create strong AI (also called human-level artificial general intelligence or artificial superintelligence). Strong AI applications can perform multiple tasks proficiently.
Strong AI is a controversial and contentious concept. The main concern with strong AI is its ability to challenge, and potentially replace, humans. Many transhumanists believe that strong AI will have self-awareness and intelligence equivalent to that of humans. Once strong AI becomes a reality, an intelligence explosion may be precipitated, and the enhancement in intelligence will be exponential. Technological singularity may be the next logical outcome. In other words, strong AI could outperform humans at nearly every cognitive task. Strong AI was originally thought to be impossible, or something that would only happen in the distant future, but the emergence of ChatGPT has cast doubt on that impossibility (Nah et al., 2023).

1.2 Human-AI Interaction


Human-Computer Interaction aims to create effective, efficient, and satisfying interactions between humans and computer systems to better meet users' requirements (Bevan et al., 2015; Stephanidis et al., 2019). With the availability of powerful AI algorithms and applications, HCI has been evolving in the direction of HAII. As one of the latest directions in the HCI field, HAII attracts much attention from academic researchers and industry practitioners, and it presents enormous potential.
Conventional HCI work mainly focuses on human interaction with non-AI computing systems, which aim to generate predictable outcomes based on predefined rules (Garibay et al., 2023). For example, HCI research has looked at the design of icons, the organization of screen layouts, and various input and output media such as voice and gestures. Compared to non-AI computing systems, the attractiveness and uniqueness of AI systems lie in their ability to exhibit human-like intelligence and autonomy based on advanced AI algorithms, such as deep learning and reinforcement learning (Xu et al., 2023). Therefore, the main improvement in HAII is the ability to perform tasks and learn to improve the interaction automatically, often without human intervention (de Visser et al., 2018). Further, HAII allows the interaction to be tailored and customized to individuals.
The autonomy of AI systems brings both advantages and risks to HAII. Although AI is increasingly capable of enhancing decision-making and making decisions, the concern is that people may lose control of the system, which can lead to unintended negative consequences. Human-centered AI (HCAI), which re-positions humans at the center of the AI lifecycle and emphasizes human control over AI systems (Shneiderman, 2020), has gained more and more support. HCAI proposes several key focuses, including safety, privacy, transparency, explanation, responsibility, and human well-being, to ensure that the design, implementation, and use of AI systems benefit human welfare (Garibay et al., 2023). Therefore, the design of AI systems and applications in HAII should be consistent with the vision of HCAI.
User interface (UI) design is critical in HCI (Stone et al., 2005). Conventional UI design, such as graphical user interface design, focuses on layout, information presentation, color scheme, and interactive elements to create a visually appealing, user-friendly, and intuitive interface (Katerattanakul & Siau, 2003; Alves et al., 2020). For HAII, as AI systems become more autonomous and powerful, the goals of UI design can shift to providing engaging, tailored, and personalized AI-specific interactions. Voice-based and text-based interaction are currently the mainstream approaches in HAII. Voice assistants, like Amazon Alexa and Apple Siri, are typical examples of voice-based HAII. Chatbots and online customer support systems often utilize text-based HAII. For these AI systems, a richer information display medium can help improve their effectiveness and user experience, according to media richness theory (Daft & Lengel, 1986). One potential enhancement of HAII is to use AI avatars to augment voice- and text-based interaction. Here, we define AI avatars as computer-generated, human-like representations of AI systems. An AI avatar integrates visual and auditory aspects to be human-like, including appearance, voice, behavior, and communication skills.
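To make this idea concrete, the following minimal Python sketch (our illustration, not an implementation described in this paper) layers hypothetical speech and animation channels on top of a generic text-based AI back end. The class and method names (TextChatBackend, AIAvatarInterface, respond) are assumptions introduced only for illustration.

    class TextChatBackend:
        """Stand-in for any text-based AI system (e.g., a chatbot service)."""
        def reply(self, user_message: str) -> str:
            return f"(AI response to: {user_message})"

    class AIAvatarInterface:
        """Adds auditory and visual channels on top of a text-based interaction."""
        def __init__(self, backend: TextChatBackend):
            self.backend = backend

        def respond(self, user_message: str) -> dict:
            text = self.backend.reply(user_message)
            return {
                "text": text,                      # original text channel
                "speech": self._synthesize(text),  # auditory channel (text-to-speech)
                "animation": self._animate(text),  # visual channel (expressions, lip sync)
            }

        def _synthesize(self, text: str) -> bytes:
            return b""  # placeholder for a speech-synthesis step

        def _animate(self, text: str) -> list:
            return []   # placeholder for facial-expression and lip-sync frames

The point of such layering is that the avatar adds richer output channels without changing the underlying text-based AI system.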
Rapid AI advancement presents many possibilities for the HCI field. This paper will discuss the new potential of HAII and the challenges these new interfaces present. Specifically, we believe that using avatars as the display form of AI systems has great potential to enhance HAII. In the remainder of this paper, we will discuss the potential of avatar generation technology, how avatars can improve HAII, the challenges resulting from AI avatars, and future directions for AI avatar research.

2 Potential Enabled by AI Avatar Technology

AI avatars have great potential to enhance HAII in various respects. In this study, we summarize the potential of AI avatar technology from four aspects: human-like appearance, human-like interaction, customization, and context adaptability.

2.1 Human-like Appearance

Human-like images have been widely used in UI design. Previous studies have found that the anthropomorphism level of agents can positively influence users' perceptions (de Visser et al., 2016) and increase users' intention to use (Ling et al., 2021). However, early anthropomorphic agents were static human-like images with a degree of cartoonishness. As computer graphics, rendering techniques, and 3D modeling technologies have advanced, AI avatars can now be very realistic, and users can barely distinguish the avatars from real humans with the naked eye. AI system designers can design AI avatars with different genders, ages, facial traits, and body shapes, just like natural human appearances, to better interact with users. In addition, many emerging technologies focus on synchronizing avatar mouth movements with speech content, realistic eye movement, and finer skin texture, which can further improve the realism of AI avatars.

2.2 Human-like Interaction

ChatGPT, the latest AI chatbot technology and a large-scale natural language processing application, has shown AI systems' potential to engage in human-like interaction. ChatGPT passed the Turing test by making a panel of judges think it was a human. It can interact with users in natural language and in friendly conversation. Nevertheless, interacting through text or audio has its limits. Visual representations like AI avatars can take advantage of more human-like interaction features. AI avatars enable the exhibition of facial expressions, gestures, and body language. By combining visual and auditory presentations, AI avatars have great potential to exhibit the "emotions" of AI systems and create a more human-like interaction. As technology advances, different tones and inflections, facial expressions, gestures, and body language can be added to create a realistic human-like interaction.

2.3 Customization

User preference is one of the most critical concerns for AI designers. Human-centered AI systems aim to provide satisfying services based on personalized user preferences (Alves et al., 2020; Garibay et al., 2023). For instance, voice navigation allows users to select different voices. ChatGPT can autonomously learn users' preferences for response styles from previous interactions. Using an AI avatar as a UI to enhance voice- and text-based interaction can give users more opportunities for a personalized experience. AI designers can adopt a customization strategy that transfers the power of AI avatar design to users, who can then select or design different AI avatars to communicate with them or assist with their work, as sketched below.
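As a concrete illustration of such a customization strategy, the following minimal Python sketch (our own hypothetical example; nothing here is prescribed by the paper) models user-adjustable avatar attributes as a small immutable profile object. All field names and default values are illustrative assumptions.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class AvatarProfile:
        """User-customizable attributes of an AI avatar (illustrative fields only)."""
        gender: str = "unspecified"       # e.g., "female", "male", "non-binary"
        age_appearance: int = 30          # apparent age in years
        voice: str = "neutral"            # identifier of a synthesized voice
        speaking_style: str = "friendly"  # e.g., "formal", "friendly", "concise"
        realism: float = 0.8              # 0.0 (cartoon-like) to 1.0 (photorealistic)

    def customize(base: AvatarProfile, **user_preferences) -> AvatarProfile:
        """Return a new profile with the user's preferred attributes applied."""
        return replace(base, **user_preferences)

    # Example: a user picks a formal, older-looking avatar for a work assistant.
    work_avatar = customize(AvatarProfile(), speaking_style="formal", age_appearance=50)

An immutable profile combined with a copy-on-write customize step keeps the user's choices separate from the designer's defaults.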

2.4 Context Adaptability

AI systems have been applied to education, healthcare, business, entertainment, and daily-life activities in both the real world and virtual worlds (Siau, 2018; Siau et al., 2018; Yang et al., 2022). Using AI avatars as the UI can better support the context adaptability of AI systems. Appearance (e.g., a person in a doctor's, nurse's, police, or military uniform) and demographic attributes (e.g., gender and race) have different influences in different contexts (e.g., Little et al., 2007; Little, 2014). Similarly, different features of AI avatars may have distinct effects in different scenarios. For example, in a healthcare environment in the Metaverse, the AI avatar can appear as a human-like doctor or a human-like nurse. Also, female AI avatars may be perceived as more trustworthy in the beauty product sales domain, whereas male AI avatars may be perceived to have more expertise and competence in the engineering procurement field. Young AI avatars may be more attractive to young users, while elderly AI avatars may seem wiser and more experienced in teaching and education contexts. Therefore, the context adaptability of AI avatars deserves sufficient attention. AI designers can flexibly change the gender, age, attractiveness, facial traits, and body shape of AI avatars to maximize their effectiveness.
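Continuing the hypothetical sketch from Sect. 2.3 (and reusing its AvatarProfile class), context adaptability could be expressed as a mapping from usage context to a default avatar profile, with the user's own customization taking precedence. The contexts and attribute choices below are illustrative assumptions, not design recommendations.

    from typing import Optional

    # Reuses the AvatarProfile dataclass from the sketch in Sect. 2.3.
    CONTEXT_DEFAULTS = {
        "healthcare": AvatarProfile(speaking_style="empathetic", age_appearance=45),
        "education":  AvatarProfile(speaking_style="patient", age_appearance=55),
        "sales":      AvatarProfile(speaking_style="friendly", age_appearance=30),
    }

    def avatar_for_context(context: str, user_choice: Optional[AvatarProfile] = None) -> AvatarProfile:
        """Prefer the user's own customization; otherwise fall back to a context default."""
        if user_choice is not None:
            return user_choice
        return CONTEXT_DEFAULTS.get(context, AvatarProfile())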

3 Using AI Avatars to Enhance HAII

3.1 Trust
Trust is a critical factor influencing user adoption and acceptance of AI systems (Siau & Shen, 2003; Siau & Wang, 2018; Choung et al., 2022). Users are more likely to use and interact with AI systems when they perceive them as trustworthy, reliable, and competent. Using human-like AI avatars as the UI can positively influence users' trust in AI systems (Jiang et al., 2023).
First, realistic human-like AI avatars can enhance the emotional connection between humans and AI systems (Culley & Madhavan, 2013), which may contribute to trust building. Generative AI, like ChatGPT, has shown its ability to communicate with humans naturally, mimicking human-like conversations. By additionally incorporating human-like appearance, expressions, and body language, trust in AI systems is likely to increase, enhancing user adoption. For instance, in healthcare, AI assistants can serve as virtual physicians that engage in empathetic conversations and provide patients with guided mindfulness exercises (Graham et al., 2019). Also, when interacting with AI assistants that have familiar and friendly appearances and behaviors, patients are more likely to disclose information, which is helpful for mental illness diagnosis and treatment.
Second, AI avatars offer more possibilities for increasing AI credibility through a variety of UI designs. Designers can customize AI avatars by changing their genders, races, ages, and other visual attributes. They can also leverage their creativity to develop avatars with distinct facial features, unique dress styles, and varied speaking styles. This allows designers to explore different approaches to improving the trustworthiness of AI systems. By designing the AI avatar's visual presentation and interaction style to align with user preferences and expectations, designers can create a UI that enhances the perceived credibility and trustworthiness of the AI system.

3.2 User Satisfaction

User satisfaction is a key aspect that influences users' intention to adopt and use AI systems (Deng et al., 2010; Dang et al., 2018). In particular, the hedonic value of AI applications and systems plays a significant role in user satisfaction (Ben Mimoun & Poncin, 2015). Both the AI avatar itself and the customization of AI avatars have the potential to increase the hedonic value of AI systems. With the support of AI avatars, AI systems can be more than just functional tools.
First, it is novel and interesting for users to interact with a highly human-like AI. AI avatars enhance the user experience by bringing a sense of surprise, engagement, and humor. Their entertainment value is further increased by features such as narrative and interactive conversation, which improve the user experience and leave a lasting impression.
In addition, AI systems that allow users to customize the AI avatars they interact with will be more appealing to users and increase hedonic value. Users might view this customization function as a gamification component that enhances user control and playfulness. The ability to customize the appearance and voice of AI avatars, and how they act and communicate, can provide users with a more satisfying and enjoyable interactive experience.

3.3 Cooperation and Collaboration

Real-world Collaboration. Collaboration and cooperation between humans and AI have become increasingly common. It is inevitable that many traditional human-human collaborations will gradually evolve into human-AI collaborations (Wang et al., 2020). In addition to the individual human-AI interactions mentioned above, AI systems can also participate in group cooperation as members of a group. For example, in business, AI-based decision-making systems can support the group decision-making process and promote consensus reaching. They can use their powerful intelligence to offer valuable suggestions, generate creative ideas, and provide novel insights. In addition to showing the advantages and disadvantages of each alternative in text form, it may be possible to use an AI avatar to present and highlight complex content in a simple and straightforward way, just like an actual human. With AI avatars as the representation of AI systems, people may become more accustomed to cooperating with AI systems and more likely to view them as real partners or colleagues.
Virtual-world Collaboration. As the concept of the Metaverse goes viral, the grand vision of the virtual world is being sketched out and is expected to become a reality soon. The Metaverse and virtual worlds promise humans an alternative life with various activities (Wang et al., 2022). Users can interact with other users and AI systems as in the real world. Virtual worlds also allow users to engage in activities that may not be feasible in the physical world (Eschenbrenner et al., 2008; Park et al., 2008; Nah et al., 2010; Nah et al., 2011; Schiller et al., 2014; Nah et al., 2017; Schiller et al., to appear). Through the integration of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) technologies, users can navigate and explore virtual environments, interact with virtual objects, and engage in virtual experiences that transcend the limitations of reality (Yousefpour et al., 2019).
AI technology has become an essential technical foundation for the virtual world. In the virtual world, avatars also represent human users (Galanxhi & Nah, 2007; Davis et al., 2009). The virtual representations of both users and AI systems can closely mimic human appearances and behaviors, creating a more authentic and engaging collaborative experience. The immersive user experience in the virtual world opens up new possibilities for human-AI collaboration. Users can work alongside AI avatars as virtual teammates, leveraging their expertise and capabilities to solve complex problems, engage in creative endeavors, or explore virtual worlds together. AI avatars can provide personalized assistance, generate intelligent suggestions, and adapt their behaviors to suit the collaborative context, ultimately enhancing the overall effectiveness and enjoyment of virtual-world interactions. The enhanced immersion and interactivity foster a stronger sense of presence, social connection, and collaboration between human users and AI avatars.

4 Challenges and Future Research Directions

4.1 Ethical and Social Concerns


Psychological Impact. Although enhancing HAII with AI avatars has many advantages, it may also have negative psychological impacts on users. For one, deeper HAII may influence users' self-identity. Users may doubt their self-worth and values when interacting with a human-like AI with incredible intelligence and extensive knowledge, leading to negative self-evaluation. For another, human-like AI avatars may aggravate users' emotional overdependence on AI systems. If AI systems can fully understand users' emotions and show sufficient patience and empathy, users may become attached to and reliant on them. The human-like appearance and behavior of AI systems can exacerbate this dependence, with unexpected adverse effects such as emotional manipulation.
Therefore, the evolving nature of HAII calls for ongoing research to ensure that the influence on user psychology is positive, empowering, and aligned with individuals' values and well-being. Ethical considerations and thoughtful design practices are necessary.
Social Interaction. Social interaction is another primary concern in HAII. While AI avatars have great potential to promote HAII, they may harm face-to-face human interactions. If AI avatars become humans' primary communication partners, the essential social skills and emotional connections nurtured through human-to-human interactions may be compromised. From a broader perspective, social interactions are the basis of society. Human-human interactions enable the formation of groups and organizations and foster a wide range of social activities.
Therefore, the balance between HAII and human-human interaction is a serious issue that requires careful consideration. AI systems are developed to augment human intelligence, not to replace humans. Although AI researchers are expected to strengthen the collaboration between humans and AI systems, such collaboration should not undermine human-human interaction and collaboration.
Discrimination and Bias. Human-like AI avatars may result in more serious social discrimination and bias. AI designers and users have more freedom to create AI interfaces according to user preferences. A potentially serious problem is the reinforcement of stereotypes and prejudices.
Social stereotypes are prevalent and influence people's social judgments and behavior toward others (Tajfel, 2010). For instance, a ubiquitous gender stereotype is that males are more competent than females in most job positions (Eagly & Mladinic, 1994). Such prejudice puts women at a disadvantage in the job market. If AI designers take advantage of this gender stereotype and design mostly male AI avatars for business-assistant AI systems, a negative consequence is that people will become more accustomed to communicating with male figures, further aggravating gender inequality in human-human interactions.
Therefore, future studies also need to pay attention to diverse strategies that mitigate the risk of reinforcing social discrimination and bias through AI avatars. They can further study how to leverage AI avatars to address prejudice problems in human-human interaction. In this way, researchers can help harness the potential of AI avatars to promote inclusiveness, mitigate biases, and foster a more equitable and respectful social environment.

4.2 Distinct Perceptions


AI avatar design is a broad area that has not been fully studied. It is widely known that AI interfaces should be user-friendly, visually appealing, and trustworthy. However, for AI avatars, a major concern is whether users perceive human-like AI avatars in the same way they perceive real humans; that is, whether the various features of AI avatars have the same impact on user perception as human features. Previous research in social psychology has extensively examined the effects of various features (e.g., gender, age, physical attractiveness) on person perception and social judgment (e.g., Krishnan et al., 2019; Duan et al., 2020). Additional effort should therefore be made in HAII to examine whether it is appropriate and effective to apply the conclusions drawn from previous social psychology studies. In addition, the customization strategy for AI avatars is worth further discussion to maximize its role in improving user experience and increasing user satisfaction.
Another noteworthy issue is the realism level of AI avatars. Is it really, or always, the best design strategy to use highly human-like avatars as AI interfaces? Will a certain level of cartoonishness in AI avatars help HAII in some cases? The opposing perspective stems from the uncanny valley effect (Mori et al., 2012), which suggests that people may feel uneasiness and fear when interacting with non-human objects whose realism is close to, but not quite, 100 percent. Researchers should therefore test the influence of realism and other features of AI avatars in different scenarios.

5 Conclusions
AI has become increasingly prevalent in our daily lives, acting as a powerful supplement to human capabilities. In this paper, we focus on the application potential of AI avatars in HAII. We systematically summarize the potential enabled by AI avatar technology and discuss how AI avatars can be used to enhance HAII. AI avatars have bright prospects for promoting the development of AI systems by increasing user adoption and acceptance. We believe that AI avatars can benefit individuals, organizations, and society as a whole in the future. We also point out the challenges that AI avatars face and directions for future research. Not only academic researchers but all parts of society should work together to promote the development of human-centered AI systems and AI avatars and to reduce the negative impacts of AI avatars.
This research has both theoretical and practical significance. It contributes to developing theories that are specific to Human-AI Interaction and to testing existing theories in the context of Human-AI Interaction. It also contributes to AI developers and practitioners by providing suggestions and guidelines for the design of Human-AI Interaction and AI avatars.

References
Alves, T., Natálio, J., Henriques-Calado, J., Gama, S.: Incorporating personality in user interface
design: a review. Pers. Individ. Differ. 155, 109709 (2020)
Ben Mimoun, M.S., Poncin, I.: A valued agent: how ECAs affect website customers’ satisfaction
and behaviors. J. Retail. Consum. Serv. 26, 70–82 (2015)
Bevan, N., Carter, J., Harker, S.: ISO 9241-11 revised: what have we learnt about usability since
1998? In: Kurosu, M. (ed.) HCI 2015. LNCS, vol. 9169, pp. 143–151. Springer, Cham (2015).
https://doi.org/10.1007/978-3-319-20901-2_13
Buchanan, B.G.: A (very) brief history of artificial intelligence. AI Mag. 26(4), 53–60 (2005)
Choung, H., David, P., Ross, A.: Trust in AI and its role in the acceptance of AI technologies. Int.
J. Hum.-Comput. Interact. 39(9), 1727–1739 (2022)
Culley, K.E., Madhavan, P.: A note of caution regarding anthropomorphism in HCI agents. Comput.
Hum. Behav. 29, 577–579 (2013)
Daft, R.L., Lengel, R.H.: A proposed integration among organizational information requirements, media richness and structural design. Manage. Sci. 32, 554–571 (1986)
Dang, M.Y., Zhang, G.Y., Chen, H.: Adoption of social media search systems: an IS success model
perspective. Pac. Asia J. Assoc. Inform. Syst. 10(2), 55–78 (2018)
Davis, A., Murphy, J., Owens, D., Khazanchi, D., Zigurs, I.: Avatars, people, and virtual worlds:
foundations for research in metaverses. J. Assoc. Inf. Syst. 10, 90–117 (2009)
de Visser, E.J., et al.: Almost human: anthropomorphism increases trust resilience in cognitive
agents. J. Exp. Psychol. Appl. 22, 331–349 (2016)
de Visser, E.J., Pak, R., Shaw, T.H.: From “automation” to “autonomy”: the importance of trust
repair in human–machine interaction. Ergonomics 61, 1409–1427 (2018)
Deng, L., Turner, D.E., Gehling, R., Prince, B.: User experience, satisfaction, and continual usage
intention of IT. Eur. J. Inf. Syst. 19, 60–75 (2010)
Duan, Y., Hsieh, T.-S., Wang, R.R., Wang, Z.: Entrepreneurs’ facial trustworthiness, gender, and
crowdfunding success. J. Corp. Finan. 64, 101693 (2020)
Eagly, A.H., Mladinic, A.: Are people prejudiced against women? Some answers from research on
attitudes, gender stereotypes, and judgments of competence. Eur. Rev. Soc. Psychol. 5, 1–35
(1994)
Eschenbrenner, B., Nah, F., Siau, K.: 3-D virtual worlds in education: applications, benefits, issues,
and opportunities. J. Database Manag. 19(4), 91–110 (2008)
Galanxhi, H., Nah, F.F.-H.: Deception in cyberspace: a comparison of text-only vs. avatar-
supported medium. Int. J. Hum.-Comput. Stud. 65(9), 770–783 (2007)
Garibay, O., et al.: Six human-centered artificial intelligence grand challenges. Int. J. Hum.-
Comput. Interact. 39, 391–437 (2023)
Graham, S., Depp, C., Lee, E.E., Nebeker, C., Tu, X., Kim, H.-C., Jeste, D.V.: Artificial intelligence for mental health and mental illnesses: an overview. Curr. Psychiatry Rep. 21(11), 116 (2019)
Hyder, Z., Siau, K., Nah, F.: Artificial intelligence, machine learning, and autonomous technologies
in mining industry. J. Database Manag. 30, 67–79 (2019)
Jiang, Y., Yang, X., Zheng, T.: Make chatbots more adaptive: dual pathways linking human-like
cues and tailored response to trust in interactions with chatbots. Comput. Hum. Behav. 138,
107485 (2023)
Kang, S.-H., Gratch, J.: The effect of avatar realism of virtual humans on self-disclosure in anony-
mous social interactions. In: CHI Extended Abstracts on Human Factors in Computing Systems,
pp. 3781–3786 (2010)
Katerattanakul, P., Siau, K.: Creating a virtual store image. Commun. ACM 46(12), 226–232
(2003)
Krishnan, V., Niculescu, M.D., Fredericks, E.: Should I choose this salesperson? Buyer’s emergent
preference in seller from mere exposure. J. Mark. Theory Pract. 27(2), 196–209 (2019)
Kumar, N., Kharkwal, N., Kohli, R., Choudhary, S.: Ethical aspects and future of artificial
intelligence. In: International Conference on Innovation and Challenges in Cyber Security
(ICICCS-INBUSH) (2016)
Ling, E.C., Tussyadiah, I., Tuomi, A., Stienmetz, J., Ioannou, A.: Factors influencing users’ adop-
tion and use of conversational agents: a systematic review. Psychol. Mark. 38(7), 1031–1051
(2021)
Little, A.C.: Facial appearance and leader choice in different contexts: evidence for task contingent
selection based on implicit and learned face-behaviour/face-ability associations. Leadersh. Q.
25, 865–874 (2014)
Little, A.C., Burriss, R.P., Jones, B.C., Roberts, S.C.: Facial appearance affects voting decisions.
Evol. Hum. Behav. 28, 18–27 (2007)
Ma, Y., Siau, K.: Artificial intelligence impacts on higher education. In: MWAIS 2018 Proceedings,
vol. 42 (2018)
Mori, M., MacDorman, K.F., Kageki, N.: The uncanny valley [from the field]. IEEE Robot. Autom.
Mag. 19(2), 98–100 (2012)
Nah, F., Eschenbrenner, B., DeWester, D., Park, S.: Impact of flow and brand equity in 3D virtual
worlds. J. Database Manag. 21(3), 69–89 (2010)
Nah, F., Eschenbrenner, B., DeWester, D.: Enhancing brand equity through flow and telepresence:
a comparison of 2D and 3D virtual worlds. MIS Q. 35(3), 731–747 (2011)
Nah, F.F.-H., Schiller, S.Z., Mennecke, B.E., Siau, K., Eschenbrenner, B., Sattayanuwat, P.: Collab-
oration in virtual worlds: impact of task complexity on team trust and satisfaction. J. Database
Manag. 28(4), 60–78 (2017)
Nah, F., Zheng, R., Cai, J., Siau, K., Chen, L.: Generative AI and ChatGPT: applications, chal-
lenges, and AI-human collaboration. J. Inform. Technol. Case Appl. Res. 25(3), 277–304
(2023)
Park, S., Nah, F., DeWester, D., Eschenbrenner, B., Jeon, S.: Virtual world affordances: enhancing
brand value. J. Virtual Worlds Res. 1(2), 1–18 (2008)
Schiller, S.Z., Mennecke, B.E., Nah, F.F.-H., Luse, A.: Institutional boundaries and trust of virtual
teams in collaborative design: an experimental study in a virtual world environment. Comput.
Hum. Behav. 35, 565–577 (2014)
Schiller, S., Nah, F., Luse, A., Siau, K.: Men are from Mars and Women are from Venus: Dyadic
collaboration in the metaverse. Internet Res. (to appear)
Shneiderman, B.: Human-centered artificial intelligence: three fresh ideas. AIS Trans. Hum.-
Comput. Interact. 12, 109–124 (2020)
Siau, K.: Education in the age of artificial intelligence: how will technology shape learning? The
Global Analyst 7(3), 22–24 (2018)
Siau, K., et al.: FinTech empowerment: Data science, artificial intelligence, and machine learning.
Cutter Bus. Technol. J. 31(11/12), 12–18 (2018)
Siau, K., Shen, Z.: Building customer trust in mobile commerce. Commun. ACM 46(4), 91–94
(2003)
Siau, K., Wang, W.: Building trust in artificial intelligence, machine learning, and robotics. Cutter
Bus. Technol. J. 31(2), 47–53 (2018)
Siau, K., Wang, W.: Artificial intelligence (AI) ethics: ethics of AI and ethical AI. J. Database
Manag. 31(2), 74–87 (2020)
Stephanidis, C., et al.: Seven HCI grand challenges. Int. J. Hum.-Comput. Interact. 35(14), 1229–
1269 (2019)
Stone, D., Jarrett, C., Woodroffe, M., Minocha, S.: User Interface Design and Evaluation. Elsevier,
San Francisco, California (2005)
Tajfel, H.: Social stereotypes and social groups. In: Hogg, M.A., Abrams, D. (eds.) Intergroup
Relations: Essential Readings, pp. 132–145. Psychology Press (2010)
Wang, D., et al.: From human-human collaboration to Human-AI collaboration: Designing AI
systems that can work together with people. In: Extended Abstracts of the 2020 CHI Conference
on Human Factors in Computing Systems, pp. 1–6 (2020)
Wang, W., Siau, K.: Artificial intelligence, machine learning, automation, robotics, future of work
and future of humanity. J. Database Manag. 30, 61–79 (2019)
Wang, Y., Siau, K.L., Wang, L.: Metaverse and human-computer interaction: A technology frame-
work for 3D virtual worlds. In: Chen, J.Y.C., Fragomeni, G., Degen, H., Ntoa, S. (eds.) HCI
International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial
Intelligence: 24th International Conference on Human-Computer Interaction, HCII 2022, Vir-
tual Event, 26 June – 1 July 2022, Proceedings, pp. 213–221. Springer Nature Switzerland,
Cham (2022). https://doi.org/10.1007/978-3-031-21707-4_16
Xu, W., Dainoff, M.J., Ge, L., Gao, Z.: Transitioning to human interaction with AI systems:
New challenges and opportunities for HCI professionals to enable human-centered AI. Int. J.
Hum.-Comput. Interact. 39(3), 494–518 (2023)
Yang, Y., Siau, K.: A qualitative research on marketing and sales in the artificial intelligence age.
In: MWAIS 2018 Proceedings, vol. 41 (2018)
Yang, Y., Siau, K., Xie, W., Sun, Y.: Smart health: Intelligent healthcare systems in the metaverse,
artificial intelligence, and data science era. J. Organ. End User Comput. 34(1), 1–14 (2022)
Yousefpour, A., et al.: All one needs to know about fog computing and related edge computing
paradigms: a complete survey. J. Syst. Architect. 98, 289–330 (2019)
