AI Customer Service Perceived 2024
Research article
Keywords: Chatbot; Artificial intelligence; Service failure; Service recovery; Consumer forgiveness; NWOM

Abstract: Leveraging the computers are social actors theory, in this study, we explore traits of artificial intelligence-based chatbots that make them perceived as trustworthy, drive consumers to forgive the firm for service failure, and reduce their propensity to spread negative word-of-mouth against the firm. Across two scenario-based studies with UK consumers, one in a utilitarian product category (n = 586) and another in a hedonic product category (n = 508), and a qualitative study, our findings suggest that the perceived safety of chatbots enhances consumers' perceived ability and empathy, and anthropomorphism enhances the benevolence and integrity of chatbots, i.e., the three traits of chatbots affect components of trustworthiness differently. Further, these traits have a positive influence on customer forgiveness and a negative influence on negative word-of-mouth.
* Corresponding author.
E-mail address: [email protected] (S. Bhattacharya).
https://doi.org/10.1016/j.ijinfomgt.2023.102679
Received 4 August 2022; Received in revised form 8 June 2023; Accepted 4 July 2023
Available online 11 July 2023
0268-4012/© 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
A. Agnihotri and S. Bhattacharya International Journal of Information Management 76 (2024) 102679
However, a 2018 study by Invesp, a consulting firm specializing in conversion rate optimization, across seven sectors, such as online retail and health, found that, on average, only 22% of consumers were willing to rely on chatbots for their needs (Shukairy, 2018). Where acceptance of chatbots by consumers is low, their usage in case of service failure portrays an even darker side of chatbots that could result in customer aggression (Huang & Dootson, 2022). Thus, it becomes vital for information systems and marketing managers to explore chatbot traits that could reduce a customer's negative response in case of service failure. In response to urges made by service scholars and the changing service recovery landscape of firms, we bridge the gap in the service failure-recovery literature by exploring the role of chatbots in effective service recovery.

Based on the research gaps mentioned above and their significance, the present study has two objectives: first, to explore traits of chatbots that make them more trustworthy; second, to explore the mediating role of chatbot trustworthiness in consumer response to SFR efforts, i.e., consumer forgiveness and spreading nWOM.

Leveraging the computers are social actors (CASA) theory, we assert that those traits of chatbots that could make consumers perceive them as having more human-like attributes would enhance chatbots' trustworthiness. Consequently, consumers would be less likely to spread nWOM and more willing to forgive the company. In this context, two traits of chatbots can make consumers perceive them more as humans: their anthropomorphic appearance and the empathy depicted in their written communication. One aspect distinguishes them from humans: privacy concerns. In the context of service failure, consumers would want to speak to a customer representative who could resolve the problem empathetically. If chatbots could appear like humans and reflect empathy in their text messages, they, as per CASA, are more likely to be perceived as social actors (Pelau et al., 2021). Thus, anthropomorphic chatbots depicting an empathetic communication style should help raise consumers' positive responses toward the company after a service failure. However, although chatbots could be perceived as social actors through their communication and appearance, consumers are also aware that chatbots are machines and only behave human-like rather than being human (Lutz & Tamó-Larrieux, 2020). Thus, their privacy concerns regarding personal information remain when interacting with chatbots.

Overall, these three attributes of chatbots, i.e., empathy, anthropomorphism, and privacy concerns, influence consumers' perceived trustworthiness of chatbots. Perceived trustworthiness, where the consumer believes that a chatbot has securely taken service recovery steps after understanding customer problems, may calm the consumer's negative emotions (Vázquez-Casielles et al., 2007). Thus, a consumer might be more willing to forgive the firm for service failure and spread less nWOM.

Our study adds to the interface of the AI and marketing literature, where extant studies have shown how AI-based chatbots influence marketing practices in the healthcare industry or branding practices (Liu et al., 2021; Yan et al., 2022). We extend this stream of literature on chatbots to explain what traits of chatbots influence customer decisions to forgive a firm for service failure and diminish the propensity to spread nWOM. We discuss our contributions in detail in the discussion section.

This paper proceeds as follows: We first discuss extant literature and then explain our theory and hypotheses. This is followed by the methods and results sections. Lastly, we present the discussion, conclusion, and managerial implications sections.

2. Theory and hypothesis development

2.1. Role of chatbots in information systems & marketing

Chatbots use AI and machine learning (ML) to simulate human communication. AI allows chatbots to interpret and interact with human beings. ML helps chatbots improve through continuous learning from customer communication (Wang et al., 2022). AI-powered chatbots have altered the nature of service interfaces from one that humans earlier drove to one that is technology-dominant today (Castillo et al., 2021). Chatbots can carry on conversations for commercial purposes, such as interacting with customers. They have specific traits that differentiate them from employees. For instance, chatbots continuously update themselves through machine learning algorithms and have infinite memory. They take only a fraction of a second to achieve these tasks, whereas humans, because of their backgrounds and learning abilities, are limited in executing such tasks quickly (Wirtz et al., 2018). Chatbots have been found to influence several marketing consequences, including customer engagement and customer loyalty (Mostafa & Kasamani, 2022) and purchase intention (Konya-Baumbach et al., 2023). Extant studies have also found that the interaction style of the chatbot, such as free-text interaction or button interaction, influences customers' outcomes of interaction with a chatbot (Haugeland et al., 2022). Similarly, the communication quality of chatbots and the level of entertainment rendered by chatbots also influence customer outcomes (Cheng & Jiang, 2022). Scholars have leveraged several theoretical frameworks to explain these marketing outcomes, such as expectation confirmation theory (Eren, 2021), CASA (Ashfaq et al., 2020), and justice theory (Xing et al., 2022).

However, several users are hesitant to interact with chatbots, as the personal touch is lacking when the agent is a chatbot rather than a human. Furthermore, as consumers have incurred economic or non-economic loss during service failure, they also want an agent who could express empathy with the loss experienced by the customer, which is generally possible with humans (Nguyen et al., 2022). Consequently, some firms hesitate to implement them (Press, 2019). Other issues, including privacy risks (Kopalle et al., 2022), are also a concern for chatbot interaction (Rapp et al., 2021). Although extant literature has explored customer satisfaction and dissatisfaction with chatbots (Ruan & Mezei, 2022; Suhaili et al., 2021), the factors driving their trustworthiness, especially in service failures, are unknown, despite a plethora of literature explaining the significance of technology trustworthiness in chatbot acceptance (Al-Gahtani, 2011).

2.2. Dimensions of trustworthiness

The trust literature separates the concept of trustworthiness (i.e., the ability, benevolence, and integrity of a trustee) from that of trust (i.e., the intention to accept vulnerability to a trustee based on positive expectations of his or her actions) (Hong & Cho, 2011; Riyanto & Jonathan, 2018). Moreover, trustworthiness could lead to trust repair, though the reverse does not hold (Xie & Peng, 2009). Information systems research also suggests that for technology to be trustworthy, it must fulfill three criteria: perceived ability, benevolence, and integrity (Lankton et al., 2015). Perceived ability implies that, with technology, a customer considers whether it renders the assured performance (McKnight, 2005). For example, a payroll system having the features necessary to produce a correct payroll for employees would be perceived as able to deliver what was promised. Another critical trait of trustworthiness is benevolence, where individuals should perceive technology as offering enough help when needed. This aligns with individuals' hope that others care enough to offer help when needed (Johnson, 2007). For technology, users likewise hope that a technology's help function will assist them with the information necessary to complete a task (McKnight, 2005). The third component of trustworthiness is integrity, i.e., where individuals expect technology to remain consistent in its performance (Lankton et al., 2015). With humans, integrity implies that an individual can be relied upon to act predictably and consistently. Technology may not be persistent in its operations due to inherent defects or situational events causing failures (McKnight et al., 2011). By responding predictably to inputs (such as responding to queries or printing on command), technology influences the user's perceptions of technology integrity.

Past studies have found trustworthiness to influence several marketing and customer outcomes, such as customer engagement with the
brand and their loyalty towards the brand (Kosiba et al., 2018). Also, consumers varied by age group in their perceived trustworthiness of technology: younger consumers perceived a higher ability, competence, and benevolence of technology than senior customers (Hallikainen et al., 2020). Consumers' trustworthiness of technology also differed from the trustworthiness they show toward humans. In online interactions, when people presented themselves through their avatars, the perceived trustworthiness of their avatars by others was different from how others rated the individuals for their trustworthiness. Thus, the trustworthiness of avatars did not coincide with that of the individuals who interacted with avatars (Machneva et al., 2022). Furthermore, in the service recovery scenario, researchers have found that trustworthiness depicted through interactional justice had a superior effect on consumer forgiveness than distributive justice (Babin et al., 2021).

However, we should also note that apart from the trustworthiness traits any agent depicts, an individual's underlying trusting propensity also impacts perceived trustworthiness. Trust propensity implies an individual's proclivity to believe in humanity and espouse a trusting outlook toward others (Furner et al., 2022). Those having a positive outlook toward humanity believe that humans are genuine and honest and can be relied on. Such individuals are less likely to be suspicious of others and also easily forgive the mistakes of others (Pica et al., 2022). A trusting outlook implies that an individual believes that, irrespective of the underlying nature of humans, the net outcome of dealing with people is always positive (Folkman & Moskowitz, 2000). If individuals, by their nature or personality, have less inclination to trust others, then their perceived trustworthiness of others is likely to be lower.

2.3. Service failure recovery (SFR)

Scholars have categorized extant research on service failure recovery into three streams. The first stream of literature explores how SFR actions influence a firm's performance and recovered service quality (Baliga et al., 2021). For example, a firm's performance increases from service recovery actions as the customer churn rate is reduced, which brings in more profits than the cost incurred in rectifying the service failure (Knox & Van Oest, 2014).

The second stream of literature examines how consumer behavior varies with recovery strategies (Giebelhausen et al., 2014). Scholars have found that, due to service recovery efforts, customer satisfaction increases and anger decreases, leading to less nWOM being spread against the company (Casidy & Shin, 2015; DeWitt & Brady, 2003).

The third stream of literature examines what inspires consumers to get involved in customer co-creation for service recovery (Dao & Theotokis, 2021). Dong et al. (2016) suggested that customers' autonomy in driving SFR procedures enhanced their motivation to participate in service recovery procedures. In this paper, integrating the information systems and marketing literature, we explore the second stream of literature. We identify factors that drive consumers to forgive firms for service failures and reduce nWOM when chatbots make service recovery efforts.

Customers' willingness to forgive a brand for service failure is critical. Extant literature also suggests that customers are sometimes willing to forgive a company for service failure due to perceived trustworthiness, even if the service recovery outcome is not up to expectation (Wei et al., 2020). This willingness to forgive a service failure happens when customers are convinced that the service provider made efforts to resolve the service failure issue, i.e., they perceived that the agent (i.e., the brand) was benevolent and integral in its effort to resolve the problem. However, if a brand cannot resolve a service problem up to expectation, trust is likely to be broken, as the consumers may have hoped that the agent would be able to resolve the problem.

2.4. Attributes of chatbots determining perceived trustworthiness dimensions in service recovery

2.4.1. Privacy concerns

Privacy concerns refer to "the degree to which a consumer is worried about the potential invasion of the right to prevent the disclosure of personal information to others" (Baek & Morimoto, 2012, p. 63). Chatbots need to collect information before they can initiate service recovery actions. Such information is collected through direct requests from the customer. Such requests may scare consumers about data privacy (Huang & Chueh, 2021). Part of the concern could be the relative newness of the technology combined with data privacy scandals that happened in the recent past, such as with Facebook (Hinds et al., 2020).

2.4.1.1. Perceived ability of chatbots amidst privacy concerns. Consumers worry about losing control over the manner and process through which technology-driven agents such as chatbots could handle their personal information, i.e., consumers perceive that chatbots are less likely to be able to keep their information confidential (Wu et al., 2012). In the context of service failure, for instance, when refunds are required, consumers might doubt the ability of chatbots to safely process their financial information shared through credit or debit cards. Instead of resolving the service failure, this may result in a double service failure due to leakage of this information. Overall, consumers may perceive chatbots as less able to control their information and privacy. Therefore, we hypothesize:

H1a. Perceived privacy concerns about interacting with chatbots decrease consumers' perceived ability of chatbots.

2.4.1.2. Perceived integrity of chatbots amidst privacy concerns. Frequently, consumers believe that chatbots may indulge in unfair utilization of the personal information provided by consumers, and they may become suspicious of the integral intentions of chatbots (Rabbani, 2022). When consumers believe that, for service recovery, they will need to risk sharing information with AI-based chatbots, where the information could be misused, consumers' perceived integrity of chatbots for helping them with service recovery may decline. Again, when consumers realize that there is no way for them to ask about or figure out the mechanisms that chatbots would use to record the information, they may perceive that chatbots may not operate honestly and may misuse the data, thus lowering the perceived integrity of chatbots (Lauer & Deng, 2007). Hence, we hypothesize:

H1b. Perceived privacy concerns about interacting with chatbots decrease consumers' perceived integrity of chatbots.

2.4.1.3. Perceived benevolence of chatbots amidst privacy concerns. As information asymmetry related to data privacy exists between AI-driven chatbots and consumers seeking resolution for service failure, it may encourage consumers to believe that chatbots are unconcerned about consumers' privacy and hence less benevolent. Thus, the unavailability of accurate and holistic information about how customers' data, both financial and non-financial, will be protected may make consumers doubt chatbots' intention to care enough to resolve consumers' problems in a well-protected manner (Martin et al., 2017). Thus, when customers perceive that their data is vulnerable and unintended uses could harm them through data breaches or identity theft, they may believe that chatbots are not caring enough to protect the data and perceive them to be less benevolent.

Moreover, as service recovery takes place in an online environment with chatbots, customers do not get any opportunity to share their concerns about data privacy, unlike in a physical retail environment, where consumer-human interaction may give consumers an opportunity to get their suspicions clarified (Luo et al., 2017). The lack of clarity chatbots offer about data privacy may make consumers believe
chatbots are not benevolent enough to provide any assurance regarding data theft protection (M.K. Hasan et al., 2021; R. Hasan et al., 2021). Hence, we hypothesize:

H1c. Perceived privacy concerns about interacting with chatbots decrease consumers' perceived benevolence of chatbots.

2.4.2. Anthropomorphism

According to the CASA theory, individuals subconsciously allocate human-like characteristics to technology and apply social directives and expectations when interacting with it (Reeves & Nass, 1996). These human-like distortions in consumers' cognitive framework are even more potent when the technology depicts human-like traits such as eyes, smiling faces, etc. Such human-like traits result in the anthropomorphism of chatbots in the present study's context, where anthropomorphism refers to human-like traits depicted by chatbots (Sheehan et al., 2020).

2.4.2.1. Perceived ability of anthropomorphic chatbots. Users could have a stronger perception of personalized attention when an anthropomorphic chatbot asks questions about their specific concerns after a service failure (Adam et al., 2021). For instance, when an anthropomorphic chatbot asks questions like "How could I assist?", consumers may believe that, similar to humans, the chatbot is also competent in resolving the service failure issue. Thus, consumers might expect anthropomorphic chatbots to examine their queries like humans, render helpful information for resolving service failure issues, and further accomplish service recovery like human agents.

Scholars have shown that anthropomorphic agents could also be considered "creepy" and may raise customer dissatisfaction (Crolic et al., 2022; Rajaobelina et al., 2021). This happens because consumers regard the anthropomorphic response of AI-driven chatbots as a technical representation design implanted through programming (Song et al., 2021). Thus, some users may perceive chatbots as non-emotional, cold, and mechanical when they show human-like traits.

However, in the context of service failure, consumers may want to share their concerns with an agent who is competent and able to resolve the issue. As predicted by CASA theory, anthropomorphic appearance in the context of service failure may make consumers perceive chatbots as agents that are not cold and programmed but able to resolve problems (Teodorescu et al., 2021). Thus, the anthropomorphic appearance of chatbots will make consumers perceive chatbots to be as competent as humans in resolving service failures (Wang & Benbasat, 2007). Hence, we hypothesize:

H2a. Anthropomorphic chatbots enhance consumers' perceived ability of chatbots.

2.4.2.2. Perceived benevolence of anthropomorphic chatbots. When chatbots appear human-like, consumers, based on CASA theory, may also believe that the chatbot has human qualities, i.e., good intention and motivation to resolve the customer issues associated with service failure (Epley et al., 2007). This goodwill belief about anthropomorphic chatbots makes consumers believe that chatbots are benevolent. As anthropomorphic chatbots resemble humans, consumers intend to form human-to-human relationships with chatbots (Lee & Choi, 2017). In this process, they begin to believe that chatbots, by their resemblance to humans, also depict human traits. In the service failure context, the service agent is expected to show concern for consumers and help resolve the service failure issue. Accordingly, consumers would also perceive chatbots to be benevolent enough and work towards resolving the service failure to help a customer recover from failure issues (Adam et al., 2021). We therefore hypothesize:

H2b. Anthropomorphic chatbots enhance consumers' perceived benevolence of chatbots.

2.4.2.3. Perceived integrity of anthropomorphic chatbots. The perceived integrity of anthropomorphic chatbots implies the extent to which consumers perceive chatbots to deal with service failure issues with utmost care and honesty (Schuetzler et al., 2021). In a service failure context, a salesperson's efforts to help resolve the issue make them appear integral (Khamitov et al., 2020). Accordingly, users may also apply the same integrity criterion that they apply to humans while evaluating a chatbot (Qiu & Benbasat, 2010). The human-like attributes of anthropomorphic chatbots, which try to help customers resolve issues, could also make them be perceived as integral.

Moreover, as the CASA theory explains, customers may have a positive response due to an enhanced level of social identity (Bickmore & Schulman, 2007). This social identity could reduce the psychological distance between customers and AI, making chatbots appear more benevolent and integral. Hence, we hypothesize:

H2c. Anthropomorphic chatbots enhance consumers' perceived integrity in chatbots.

2.4.3. Perceived empathy

Empathy is the act of depicting another person's emotional experience (Plank et al., 1996). Leveraging CASA theory, we examine the remedial effect of the empathy of chatbots that enhances their trustworthiness after a service failure episode. CASA suggests that individuals tend to regard computers and related technologies, such as chatbots, as having a human role, even if they are aware that such technologies have no senses (Nass & Moon, 2000). Chatbots can give interactive and language cues to customers that could evoke social responses among consumers (Nass & Steuer, 1993). As humans are empathetic, when service failure takes place, empathy has to be depicted via communication. For instance, consumers may find service recovery efforts insincere if a customer representative sounds rude or ignores customer requests (Migacz et al., 2018). Thus, it may not be easy to assume all underlying traits of anthropomorphism as resembling humans. For this reason, scholars have treated empathy and anthropomorphism as independent of each other (Pelau et al., 2021).

2.4.3.1. Perceived ability of empathetic chatbots. Ability implies whether customers are likely to perceive chatbots as competent, i.e., consumers believe they have knowledge relevant to the expected behavior (Park et al., 2021). Doctors are perceived as competent when they display knowledge about a patient's disease. Psychologists are perceived as competent when they display psychological knowledge about human behavior. In the context of service failure, an expected response from the agent resolving the problem is to show empathy. Thus, when chatbots reflect empathy, they are perceived as competent and able to resolve customer problems. The empathic response of chatbots could make consumers believe that chatbots have the ability to understand their concerns. Hence, we hypothesize:

H3a. Perceived empathy of chatbots towards consumers enhances consumers' perceived ability of chatbots.

2.4.3.2. Perceived benevolence of empathetic chatbots. In a service failure context, if chatbots could respond warmly and compassionately, it could allay within consumers the gloomy feelings caused by the service failure, thus making consumers perceive the chatbots to be benevolent, i.e., caring about customer issues (Rapp et al., 2021). Studies suggest that in human-to-human interaction, language and rhetoric have the power to signal the intent and character of the parties in the conversation. For instance, when leaders give compassionate speeches, they are considered more benevolent (Karakas & Sarigollu, 2013). In an employee-customer conversation context, when employees issue a superfluous apology, they signal that they have acknowledged the customers' perspective on the service failure and expressed regret for the same, making the employees be perceived as more benevolent (Brooks et al.,
2014). Chatbots could also provide empathetic cues as they converse with consumers for service recovery, making consumers perceive them as empathetic.

H3b. Perceived empathy of chatbots towards consumers enhances consumers' perceived benevolence of chatbots.

2.4.3.3. Perceived integrity of empathetic chatbots. Integrity implies that chatbots will continue to show empathetic responses as the communication proceeds between the customer and the chatbot to resolve the service failure issue (Ramesh & Chawla, 2022). Consumers apply societal principles and interpersonal interaction practices in this communication process with chatbots (Park et al., 2021). As chatbots remain consistently compassionate while resolving customer queries through two-way communication, their consistent caring and genuine attitude in this communication will likely make them be perceived as integral (Schiemann et al., 2019), thus enhancing trustworthiness. Hence, we hypothesize:

H3c. Perceived empathy of chatbots towards consumers enhances consumers' perceived integrity in chatbots.

2.5. Trustworthiness of chatbots and consumers' propensity to forgive service failure and nWOM

When service failure happens, consumers stop doing business with the service provider (Grégoire et al., 2009) or seek retaliation by spreading nWOM (Wangenheim, 2005). Fetscherin and Sampedro (2019) defined forgiveness as letting go of negative emotions resulting from the wrongdoing of oneself, others, or situations. Once consumers encounter service failure, they experience emotional state changes (Gaudine & Thorne, 2001). Several factors influence the way consumers deal with their disappointment. For example, the prior relational bond between the customer and the service provider could influence the consumer's propensity to regulate their emotional state positively and forgive the brand (Joireman et al., 2016).

In the context of service failure, scholars have reported that consumer personality traits such as religiosity and spirituality influence consumers' propensity to forgive the firm for service failure (Tsarenko & Tojib, 2012). Among firm-level efforts, extant literature has found that offering apologies, giving voice to consumers, and offering compensation enhance consumers' propensity to forgive the firm (Harrison-Walker, 2019). Scholars have also found a role of perceived justice in consumers' willingness to forgive the firm after service failure (Babin et al., 2021); for some consumers, forgiveness happens when a firm offers both apologies and compensation (Casidy & Shin, 2015).

Studies also suggest that when consumers exhibit a positive attitude toward brands, i.e., perceive the brand as trustworthy, they pardon service failures (Cheng et al., 2012) and are not much influenced by nWOM (Ho-Dac et al., 2013). Extending this literature to chatbots, when chatbots are perceived as more able, competent, compassionate, and caring, i.e., trustworthy, consumers having a positive attitude towards chatbots may intend to forgive the service provider for the service failure.

Word-of-mouth is another critical outcome of service failure and recovery efforts (Choi & Choi, 2014). Harrison-Walker (2001, p. 63) defined word-of-mouth as "informal, person-to-person communication between a perceived non-commercial communicator and a receiver regarding a brand, a product, an organization, or a service." Word-of-mouth can be positive or negative depending on how fairly a brand treats its customers (Wang et al., 2021).

Dissatisfied customers are more likely to use nWOM as they want to express their displeasure. Consumers may spread negative word of mouth for three reasons: when they want the firm to pay attention to the causes of dissatisfaction, when they want their friends and relatives not to suffer similar negative experiences with the focal service provider, or when consumers want to express their feelings so that the company could rectify the same (Verhagen et al., 2013).

However, when firms take actions to recover from a service failure, such as chatbots resolving consumer issues, consumers are likely to focus on the efforts made by an able, benevolent, and integral chatbot to resolve the issue. When consumers perceive chatbots to be trustworthy through their benevolence and integrity towards resolving the problem, consumers' intent to penalize the company may decline, resulting in less spread of nWOM. Hence, we hypothesize:

H4a. Perceived ability of chatbots encourages consumers to forgive firms for service failures.

H4b. Perceived benevolence of chatbots encourages consumers to forgive firms for service failures.

H4c. Perceived integrity of chatbots encourages consumers to forgive firms for service failures.

H5a. Perceived ability of chatbots encourages consumers to reduce nWOM against firms in case of service failures.

H5b. Perceived benevolence of chatbots encourages consumers to reduce nWOM against firms in case of service failures.

H5c. Perceived integrity of chatbots encourages consumers to reduce nWOM against firms in case of service failures.

In the above five sets of hypotheses, we discussed how chatbots' traits influence their perceived trustworthiness and how this trustworthiness influences consumers' willingness to forgive service providers and spread less nWOM. As a corollary, the trustworthiness dimensions mediate the relationships between chatbot traits and consumer responses to service failure. Hence, we hypothesize:

H6. Perceived ability, benevolence, and integrity mediate the relationship between chatbot traits (i.e., privacy concerns, anthropomorphism, and perceived empathy) and customer outcomes (i.e., forgiveness and nWOM).

Fig. 1 presents the conceptual model.

3. Data and method

3.1. Design

Across two studies, we used a scenario-based approach, which researchers have extensively used in studies related to service failure and recovery (Park & Ha, 2016; Singh & Crisafulli, 2015; Smith et al., 1999). Scenario-based studies are suitable because a) compared to the recall-based approach or retrospective self-reports (Roggeveen et al., 2012), they are more robust, as a recall-based approach is sensitive to respondents' memory lapses, "rationalization tendencies and consistency factor" (Roggeveen et al., 2012, p. 774); b) the scenario-based approach is also better than the enactment of a real-life service failure setting, given the latter is more prone to ethical and managerial issues (Park & Ha, 2016); and c) the scenario-based approach is also preferable to observation or enactment-based field studies because it reduces the challenges with the expenses and time involved (Smith et al., 1999).

In the two studies, we created hypothetical service failure and recovery scenarios with an online retailer. Exploratory research with 47 [Females = 25] postgraduate students in a university in the North East of the UK revealed that they experienced most instances of service failure in online retailing (34%), followed by banking (26%) and airline bookings (24%). The exploratory study further revealed that firms used chatbots during the service recovery process, with participants experiencing the most exposure to chatbots in online retailing (30%), followed by banking and airline bookings (28%). We also asked participants about their experience with service recovery efforts when interacting with chatbots. Surprisingly, on average, 28% reported having a positive experience with service recovery efforts across online retailing, banking, and airline bookings. Based on these exploratory research findings, we
5
A. Agnihotri and S. Bhattacharya International Journal of Information Management 76 (2024) 102679
identified online retailing as the service setting of our hypothetical 3.2. Pretest
scenario.
In another exploratory study with 28[Females= 15] postgraduate Before we conducted the two studies, we employed a pretest. The
students from another UK university, 57% of participants considered objective of the pretest was to a) evaluate if the developed scenarios
delays in receiving an order as a major service failure instance in online performed as expected, b) check the validity of measurement scales, c)
retailing. Therefore, in both the studies, we developed scenarios in the enhance the survey questions’ quality and d) test and adjust the survey.
online retailing failure context, specifically delays in receiving an or Eighty-one postgraduate students (Females = 40) participated in this
dered item. Using the services of a professional graphics designer, we pilot test. After making the necessary improvements, we proceeded with
developed a chatbot, “Russell,” trying to recover a service failure the scenario-based studies.
instance. We developed four scenarios of service failure and recovery.
In the first two scenarios, the product category was toilet tissue rolls, 3.3. Participants
a utilitarian product. In the remaining two scenarios, the product cate
gory was earphones, a hedonic product. To identify the product cate In Study 1, we collected data from UK consumers using a purposive
gories, we undertook exploratory research with 37 [Females= 18] sampling strategy (Dörnyei & Lunardo, 2021; Talwar et al., 2021). We
postgraduate students from a UK university. We provided the students used purposive sampling because past literature has used this sampling
with a list of ten product categories, and using the Voss et al. (2003) strategy to examine consumer-related specific issues. For example, Tsai
ten-item hedonic and utilitarian scale, we identified toilet tissue rolls and Su (2009) in the service failure and recovery context and Ameen
and earphones as the utilitarian and hedonic products for Study 1 and 2, et al. (2022) in the context of chatbots, augmented reality, and social
respectively. The Voss et al. (2003) scale is a ten-item semantic differ media. We also used purposive sampling because the criteria for being a
ential scale. Utilitarian items included: “effective/ineffective, help part of this study was consumers having experienced service failure and
ful/unhelpful, functional/not functional, necessary/unnecessary, and recovery in an e-retailing context and interaction with chatbots in this
practical/impractical.” The hedonic items of the scale were: “not fun/ process. The data for the study was obtained between January 2022 to
fun, dull/exciting, not delightful/delightful, not thrilling/thrilling, and February 2022. We used Prolific for administering the survey ques
enjoyable/unenjoyable.” We considered utilitarian and hedonic product tionnaire. Extant research has extensively used Prolific, an online plat
categories as consumer behavior could change across utilitarian versus form for respondent recruitment. First, we invited respondents to
hedonic product categories (Roy & Ng, 2012). participate in the study. Once they accepted the invitation, we asked
We also conducted another exploratory study with 46 [Females filtering questions about their experience with service failure and re
= 23] postgraduate students from a UK university to identify the sce covery in an e-retailing context and interaction with chatbots in that
narios we could use in the two studies. Based on students’ responses, we process.
used the following scenario across the two studies: A customer was Next, study participants responded to the scale items, followed by
experiencing a service failure as their ordered item [toilet tissue rolls in demographic questions on age, gender, education, and annual income.
Study 1 and an earphone in Study 2] was not delivered by the promised We obtained 628 questionnaires, which also met the filtering criteria. Of
date by Tuple.com, a hypothetical e-retailer. Russell, the chatbot of these 628 responses, we obtained 586 [Females= 304] completely
Tuple.com, first tried to understand the failure issue that the customer filled-in questionnaires. The final sample consisted of UK consumers
was experiencing. Next, they apologized for the inconvenience, tracked only. The median age and income of the respondents were 31.02 years
the package, and offered an alternate date and time for the delivery of and £ 32,000, respectively.
the ordered item or a full refund. The customer agreed to an alternate Following a similar strategy as Study 1, in Study 2, we obtained 508
delivery date and time. We also asked participants of this exploratory [Females= 260] filled-in questionnaires from UK-based participants
research about a) the believability of the scenario using the following [Median Age= 33.78 years; Median Income= £32,820]. In both studies,
question: "I think the scenario is believable," b) the believability of the our final sample was skewed toward younger adults compared to the UK
chatbot: "I believe in the scenario the customer was interacting with a population. Table 1 presents the sample demographics of studies 1 and
chatbot" and c) that the context of the scenario was "service failure and 2.
recovery." The first two items were measured using seven-point Likert
scales ranging from "1" not at all believable to "7" "completely believ
3.4. Measures
able." The third item was measured using a seven-point Likert scale
ranging from "1" strongly disagree to "7" strongly agree. Appendices 1.1
3.4.1. Consumer forgiveness
and 1.2 present the scenarios of studies 1 and 2.
Following extant literature (Harrison-Walker, 2019; McCullough
et al., 2003; Rye et al., 2001), we measured consumer forgiveness using
6
A. Agnihotri and S. Bhattacharya International Journal of Information Management 76 (2024) 102679
Table 1
Demography of the sample.
Study 1 (N = 586) Study 2 (N = 508)
a 12-item scale. The consumer forgiveness scale consists of two sub personal, on a seven-point scale ranging from “1” (describes very poorly)
scales: the absence of negative responses (six items) and the presence of to “7” (describes very well) (Araujo, 2018; Kim and Sundar, 2012). The
positive responses (six items). A sample of items measuring the absence Cronbach’s alpha of the scale was 0.86 and 0.87 for studies 1 and 2,
of negative responses scale is: “I won’t stop thinking about how I was respectively.
wronged by the e-retailer,” “This e-retailer’s wrongful actions will keep
me from enjoying life,” and “I will spend time thinking about ways to get 3.4.5. Perceived empathy
back at the e-retailer who wronged me.” A sample of items measuring We measured perceived empathy using a five-item scale adopted
the presence of positive responses scale is: “I wish for good things to from Croes and Antheunis (2021) and Stiff et al. (1988). Sample of scale
happen to the e-retailer who wronged me,” I have compassion for the items was: “The chatbot said the right thing to make me feel better,”
e-retailer who wronged me,” and “I forgive the e-retailer for what they “The chatbot responded appropriately to my feelings and emotions,” and
did to me.” In the present study, we found both the sub-scales to be “The chatbot came across as empathic.” Consumers rated each of the
positively (0.426) and significantly related (p < 0.001), which is items on a seven-point Likert scale ranging from “1” (strongly disagree)
consistent with extant literature on forgiveness (Harrison-Walker, 2019; to “7” (strongly agree) to measure the scale items. The Cronbach’s alpha
Rye et al., 2001). Forgiveness literature also explicitly mentions that of the total scale was 0.82 and 0.79 for studies 1 and 2, respectively.
both the sub-components of forgiveness are “intertwined and therefore
inseparable” (Harrison-Walker, 2019; p. 382), and researchers should 3.4.6. Perceived ability, perceived benevolence, and perceived integrity
conduct further analysis using the scale in its completeness and not as We measured each of the constructs of perceived ability, perceived
two sub-constructs (Asgari & Roshani, 2013; Harrison-Walker, 2019; benevolence, and perceived integrity using four-item scales adapted
Rye et al., 2021). Thus, we considered all the 12 items together and not from Akter et al. (2011). Sample scale items are: “The chatbot performs
as separate scales. We used seven-point Likert scales ranging from “1” its role very well,” “The chatbot has good intentions towards me,” and “I
(strongly disagree) to “7” (strongly agree) to measure the scale items. would characterize the chatbot as honest.” All the 12 items were
The Cronbach’s alpha of the total scale was 0.89 for study 1 and 0.86 for measured using a seven-point Likert scale ranging from “1” (strongly
study 2. disagree) to “7” (strongly agree). The Cronbach’s alpha values of
perceived ability, perceived benevolence, and perceived integrity scales
3.4.2. nWOM were 0.91, 0.86, and 0.92 for study 1 and 0.85, 0.84, and 0.87 for study
We measured nWOM using a six-item scale adopted from Harri 2.
son-Walker (2019). Sample scale items included: “I will complain to
friends or family about this e-retailer,” “I will say negative things to 3.5. Control variables
others in the community about this e-retailer,” and “I will try to convince
friends or relatives not to use this e-retailer.” We used seven-point Likert Following extant research studies (Zafar et al., 2021), particularly in
scales ranging from “1” (strongly disagree) to “7” (strongly agree) to the technological context (Cheng & Mitomo, 2017), we controlled de
measure the scale items. The Cronbach’s alpha of the total scale for mographic variables such as age (we took natural logarithm to reduce
studies 1 and 2 were 0.87 and 0.89, respectively. variability), gender (Female dummy coded as “1” and male “0”), and
education (dummy variable) to ensure variance in these demographic
3.4.3. Perceived privacy concerns variables does not influence the results of the empirical analysis.
We measured consumers’ perceived privacy concerns about chat We also controlled consumers’ trait anger (Gambetti & Giusberti,
bots, adapting a three-item scale from Zhang et al. (2019). The scale 2009) and dispositional compassion (Shiota et al., 2006) as they have
items included: “I am concerned that the chatbot will collect too much been found to influence the propensity to forgive (Fehr et al., 2010).
personal information from me,” “I am concerned that the chatbot will Consumers who tend to be short-tempered or angered easily are less
use my personal information for other purposes without my authoriza likely to forgive firms for their mistakes. We measured trait anger using a
tion,” and “I am concerned that the chatbot will share my personal in 10-item scale (Gambetti & Giusberti, 2009). Sample items included: "I
formation with other entities without my authorization.” We used get angry when I have to wait because of other’s mistakes" and "I feel
seven-point Likert scales ranging from “1” (strongly disagree) to “7” infuriated when I do a good job and get a poor evaluation." We measured
(strongly agree) to measure the scale items. The Cronbach’s alpha of the dispositional compassion using Shiota et al.’s (2006) five items scale.
scale was 0.83 and 0.80 for studies 1 and 2, respectively. Sample items included: "It’s important to take care of people who are
vulnerable" and "I often notice people,e who need help." We used a
3.4.4. Anthropomorphism seven-point Likert scale to measure each item of dispositional anger and
We measured the perceived anthropomorphism of chatbot by asking dispositional compassion scales ("1" = strongly disagree and "7" =
participants to rate four adjectives: likable, sociable, friendly, and strongly disagree"). Dispositional compassion could make consumers
concerned about the pain and problems of others. Consequently, given consumers’ dispositional compassion for the company, they are less concerned about their own problems and are more willing to forgive, or less willing to spread nWOM. The reliabilities of the dispositional anger and dispositional compassion scales were 0.77 and 0.81, respectively, in Study 1 and 0.79 and 0.80 in Study 2.

4. Study 1: Results

4.1. Test of scenario and chatbot believability and identification of context

Similar to one of the exploratory research projects, we asked participants about a) the believability of the scenario, b) the believability of the chatbot, and c) the context of the scenario. Respondents reported that a) the scenario was believable [M = 5.89; t(584), p < 0.001], b) the chatbot was believable [M = 6.01; t(584), p < 0.001], and c) the context of the scenario was service failure and recovery [M = 5.23; t(584), p < 0.001]. Additionally, we also employed Voss et al.’s (2003) ten-item hedonic and utilitarian scale. Study 1 participants overwhelmingly considered toilet tissue rolls a utilitarian product.

4.2. Descriptive Statistics

In Table 2, we present the descriptive statistics of the variables in the study. There are statistically significant correlations, in the expected directions, between the antecedents (i.e., perceived privacy concerns, anthropomorphism, and perceived empathy) and the mediators, i.e., perceived ability (r(privacy concerns, perceived ability) = −0.21, p < 0.001; r(anthropomorphism, perceived ability) = 0.25, p < 0.001; r(perceived empathy, perceived ability) = 0.34, p < 0.001), perceived benevolence (r(privacy concerns, perceived benevolence) = −0.06, p < 0.10; r(anthropomorphism, perceived benevolence) = 0.28, p < 0.001; r(perceived empathy, perceived benevolence) = 0.24, p < 0.001), and perceived integrity (r(privacy concerns, perceived integrity) = −0.05, p < 0.10; r(anthropomorphism, perceived integrity) = 0.33, p < 0.001; r(perceived empathy, perceived integrity) = 0.28, p < 0.001).

Further, perceived ability, perceived benevolence, and perceived integrity are statistically significantly correlated in the expected directions with consumer forgiveness (r(perceived ability, consumer forgiveness) = 0.26, p < 0.001; r(perceived benevolence, consumer forgiveness) = 0.25, p < 0.001; r(perceived integrity, consumer forgiveness) = 0.31, p < 0.001) and with nWOM (r(perceived ability, nWOM) = −0.20, p < 0.001; r(perceived benevolence, nWOM) = −0.22, p < 0.001; r(perceived integrity, nWOM) = −0.26, p < 0.001). These initial results provide preliminary evidence regarding our stated hypotheses.

Table 2
Study 1: Correlation matrix and descriptive statistics.
                              1      2      3      4      5      6      7      8      9      10     11     12     13
1  Consumer forgiveness       1
2  nWOM                      -0.43   1
3  Perceived privacy concerns -0.16  0.21   1
4  Anthropomorphism           0.21  -0.15  -0.18   1
5  Perceived empathy          0.18  -0.11  -0.07   0.20   1
6  Perceived ability          0.26  -0.20  -0.21   0.25   0.34   1
7  Perceived benevolence      0.25  -0.22  -0.06   0.28   0.24   0.41   1
8  Perceived integrity        0.31  -0.26  -0.05   0.33   0.28   0.48   0.51   1
9  Ln Age                     0.13  -0.18   0.10   0.09   0.21   0.07   0.05   0.11   1
10 Gender                     0.14  -0.13   0.18   0.04   0.10   0.03   0.09   0.05   0.03   1
11 Education                  0.11  -0.21   0.10   0.07   0.09   0.08   0.03   0.02   0.05   0.05   1
12 Dispositional anger       -0.25   0.17   0.06   0.05   0.10  -0.11  -0.08  -0.06  -0.09  -0.03  -0.14   1
13 Dispositional compassion   0.31  -0.22   0.08   0.06   0.14   0.13   0.13   0.09   0.12   0.07   0.15  -0.25   1
   Mean                       5.73   5.92   5.84   5.31   5.16   5.21   5.65   4.92   3.37   0.53   0.65   4.53   4.81
   S.D.                       1.31   1.12   1.43   0.89   1.10   0.91   1.13   1.06   2.90   0.49   0.41   1.05   1.16
Note: ***r > 0.136, p < 0.001; **r = 0.11–0.13, p < 0.01; *r = 0.083–0.10, p < 0.05; #r = 0.07–0.082, p < 0.10.

4.3. Common method bias

According to Podsakoff et al. (2003), common method bias is a critical issue in questionnaire-based single-survey studies. Following the procedures recommended by Podsakoff et al. (2003) and Lindell and Whitney (2001), we took several steps to control common method bias in the present study. As a first step, respondent anonymity was maintained, and respondents also received assurance of the same. Next, we randomized the order of the questions. Third, we employed a single-factor CFA, which revealed an extremely poor fit (Chi-square/df = 13.26; RMSEA = 0.308; CFI = 0.577; TLI = 0.512), indicating minimal influence of common method bias. We also carefully placed several filler questions and two marker variables in the questionnaire to achieve psychological separation. Following extant literature and the guideline provided by Malhotra et al. (2006) that a marker variable should be theoretically unrelated to the focal constructs of the study, we included the seven-item generalized anxiety disorder scale (Fitzsimmons-Craft et al., 2022) and income (Blut et al., 2021) as marker variables. The marker variables had insignificant correlations (p > 0.10) with the study’s focal constructs (Lindell & Whitney, 2001). These steps indicated that common method bias was not an issue in the present study.

4.4. Social desirability bias

In the present study, we also checked for the influence of social desirability bias (De Vellis, 1991; Richins & Dawson, 1992), employing the short version (i.e., ten items) of Crowne and Marlowe’s (1960) social desirability scale. Sample items included: “I’m always willing to admit it when I make a mistake” and “I always pay attention to the way I dress.” We measured each scale item using a “True” or “False” dichotomous scale. The calculated social desirability score had weak and insignificant correlations with the study constructs. Therefore, our overall conclusion was that responses to perceived privacy concerns, anthropomorphism, perceived empathy, perceived benevolence, perceived ability, perceived integrity, consumer forgiveness, and nWOM were not influenced by social desirability.

4.5. Measurement model

Using MPLUS 8.0, we conducted a confirmatory factor analysis to test the measurement model. The measurement model reported a good fit (Chi-square/df = 2.01; RMSEA = 0.04; CFI = 0.96; TLI = 0.97). Next, using Fornell and Larcker’s (1981) mechanism, we assessed the constructs’ convergent and discriminant validities. The steps involved a) calculating the average variance extracted (AVE) and the composite reliability (CR) for each construct and b) comparing the square root of each construct’s AVE with the construct correlations. All the constructs had acceptable convergent and discriminant validities. In Table 3, we present the constructs’ reliability and validity measures.

4.6. Test of Hypotheses

We tested hypotheses 1–6 using a structural equation model. We employed MPLUS 8.0 to test the structural model. Table 4 presents the results of the hypothesis tests.

Through hypothesis 1, we predicted that perceived privacy concern was negatively associated with perceived ability, benevolence, and integrity. Our analysis indicated that perceived privacy concerns had a negative and statistically significant effect on perceived ability (β = −0.104, p < 0.001). However, the effects of perceived privacy concern on perceived benevolence (β = −0.108, p < 0.1) and perceived integrity (β = −0.112, p < 0.1), though negative, were insignificant. Hence, we receive only partial evidence in support of the first hypothesis.

A test of hypothesis two revealed that chatbot anthropomorphism had a positive impact on the perceived ability (β = 0.146, p < 0.001), perceived benevolence (β = 0.157, p < 0.01), and perceived integrity (β = 0.163, p < 0.001) of the chatbot. We thus receive evidence in support of the second hypothesis.

According to the third hypothesis, the perceived empathy of the chatbot was positively and significantly associated with the chatbot’s perceived ability (β = 0.201, p < 0.001), perceived benevolence (β = 0.198, p < 0.001), and perceived integrity (β = 0.164, p < 0.01). Hence, we receive evidence in support of hypothesis 3.

Next, as predicted through hypotheses 4 and 5, perceived ability, perceived benevolence, and perceived integrity had a positive influence on customer forgiveness (β(Ability) = 0.085, p < 0.001; β(Benevolence) = 0.133, p < 0.001; β(Integrity) = 0.186, p < 0.001) and a negative influence on nWOM (β(Ability) = −0.092, p < 0.001; β(Benevolence) = −0.118, p < 0.001; β(Integrity) = −0.223, p < 0.001). The perceived ability of chatbots increased consumers’ willingness to forgive the brand for service failure. This implies that chatbots’ ability to demonstrate their efforts towards recovering from service failures calmed consumers, and they acknowledged the efforts of chatbots by being willing to forgive the firm for the service failure. Thus, we receive evidence in support of H4a.

Perceived benevolence of the chatbot also increased consumers’ willingness to forgive the firm for service failure. Thus, consumers were willing to give up their retaliation intent and other kinds of destructive behaviors and to respond positively to the benevolent behavior of chatbots. We thus receive evidence in support of H4b.

Integrity-based service failure implies that there exist potential fundamental flaws in moral character. Thus, when chatbots are perceived to have integrity, it adds to their social evaluation, and consumers may believe that the failure was not intended to harm the consumer. Thus, the recovery efforts made by a chatbot perceived to have integrity would reduce retaliation intent among consumers, and they would be more willing to forgive the firm for the service failure. Thus, we receive evidence in support of H4c.

For our fifth hypothesis, we suggested that chatbots’ perceived ability, benevolence, and integrity would reduce consumers’ propensity to spread nWOM. Our findings suggest that the perceived ability of chatbots reduced consumers’ propensity to spread negative word of mouth. The perceived ability of chatbots implies their perceived level of technical know-how and skills for conducting effective service recovery efforts. As chatbots demonstrate service recovery efforts, this signals their intelligence and efficiency. Given that the chatbots had demonstrated their expertise, consumers believed in the chatbots’ ability to resolve the issue and, accordingly, lowered their propensity to spread negative word of mouth. Thus, we receive evidence in support of H5a.

Through the second subsection of the fifth hypothesis (H5b), we asserted that the perceived benevolence of chatbots decreases consumers’ willingness to spread negative word of mouth. Benevolence implies that the agent is a well-wisher of the focal party and that its efforts are intended to benefit the focal party. Thus, when consumers perceive chatbots to be benevolent, they realize that chatbots are doing their best to recover from service failures and to help the customer to the best possible extent. Realizing the benevolent intent of chatbots, consumers decide not to retaliate by spreading negative word of mouth against the company. Thus, we receive evidence in support of H5b.

Through H5c, we hypothesized that the perceived integrity of chatbots reduces consumers’ willingness to spread nWOM. Perceived integrity implies adherence to a set of sound principles. The perceived integrity of chatbots during service recovery implies that consumers believed chatbots used set standards of moral principles to resolve customer issues. Due to this perceived morality of chatbots, consumers drop their intention of spreading negative word of mouth against the company. Thus, we receive evidence in support of H5c. Overall, we receive evidence in support of H4 and H5.

We predicted through the sixth hypothesis that perceived ability, perceived benevolence, and perceived integrity act as mediators between the antecedents (perceived privacy concerns, anthropomorphism, and perceived empathy) and the customer outcomes (i.e., consumer forgiveness and nWOM). To test the mediation models, we employed Hayes’ (2018) procedure with a bootstrapping re-sample value of 5000. In Table 5, we present the results of the mediation analyses.

In Column 1 of Table 5, we observe that the estimated path coefficient for the indirect effect of perceived privacy concerns on consumer forgiveness through perceived ability was statistically significant (θ = −0.0084; LCI = −0.0127; UCI = −0.0041). Also, from Column 2 of Table 5, we observe that the estimated path coefficient for the indirect effect of perceived privacy concerns on nWOM through perceived ability was statistically significant (θ = 0.0095; LCI = 0.0036; UCI = 0.0154).

In Column 3 of Table 5, we observe that the estimated path coefficients for the indirect effects of anthropomorphism on consumer forgiveness through perceived ability (θ = 0.0124; LCI = 0.0055; UCI = 0.0193), perceived benevolence (θ = 0.0209; LCI = 0.0138; UCI = 0.0281), and perceived integrity (θ = 0.0303; LCI = 0.0159; UCI = 0.0447) were statistically significant. Also, in Column 4 of Table 5, we observe that the estimated path coefficients for the indirect effects of anthropomorphism on nWOM through perceived ability (θ = −0.0134; LCI = −0.0200; UCI = −0.0068), perceived benevolence (θ = −0.0185; LCI = −0.0276; UCI = −0.0094), and perceived integrity (θ = −0.0363; LCI = −0.0514; UCI = −0.0212) were statistically significant.

Finally, in Column 5 of Table 5, we observe that the estimated path coefficients for the indirect effects of perceived empathy on consumer forgiveness through perceived ability (θ = 0.0171; LCI = 0.0092; UCI = 0.0250), perceived benevolence (θ = 0.0263; LCI = 0.0162; UCI = 0.0364), and perceived integrity (θ = 0.0305; LCI = 0.0154; UCI = 0.0457) were statistically significant. Similarly, in Column 6 of Table 5, we observe that the estimated path coefficients for the indirect effects of perceived empathy on nWOM through perceived ability (θ = −0.0184; LCI = −0.0272; UCI = −0.0096), perceived benevolence (θ = −0.0233; LCI = −0.0355; UCI = −0.0113), and perceived integrity (θ = −0.0365; LCI = −0.0529; UCI = −0.0201) were statistically significant. Thus, we receive evidence in support of hypothesis six.

4.7. Discussion

Overall, we find that the three dimensions of perceived trustworthiness, i.e., perceived ability, perceived benevolence, and perceived integrity, act as mediators of the relationship between the perceived privacy concerns, chatbot anthropomorphism, and perceived empathy traits and consumer responses to service recovery efforts. However, only perceived ability acted as a mediator for perceived privacy concerns about chatbots. As such, perceived privacy concerns reduced the perceived ability of chatbots, which decreased consumer propensity to forgive service failure and […]
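The convergent-validity statistics reported for each construct in Table 3, composite reliability (CR) and average variance extracted (AVE), follow directly from standardized factor loadings. A minimal sketch of both formulas; the three loadings below are the perceived-empathy item loadings shown in Table 3 and serve only as an illustration, since the paper computes these statistics over each construct's full item set in MPLUS:

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of a standardized item is 1 - lambda^2."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1.0 - lam ** 2)))

# Three perceived-empathy loadings from Table 3 (illustration only; not the
# full five-item set used in the paper).
lam = [0.90, 0.84, 0.87]
print(f"AVE = {ave(lam):.2f}, CR = {composite_reliability(lam):.2f}")
print(f"sqrt(AVE) = {np.sqrt(ave(lam)):.2f}")
```

Fornell and Larcker's discriminant check then requires the square root of each construct's AVE to exceed that construct's correlations with every other construct.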
Table 3. Study 1: Convergent and Discriminant Validity (item factor loadings, Cronbach’s alpha, composite reliability, and AVE per construct, with the square root of each construct’s AVE on the diagonal of the construct correlation matrix; e.g., consumer forgiveness: α = 0.89, CR = 0.95, AVE = 0.61, item loadings 0.66–0.91; perceived empathy: α = 0.82, CR = 0.86, AVE = 0.56, item loadings 0.84–0.90; anthropomorphism item loadings include friendly 0.77 and personal 0.88).

[…] and effectiveness as alternate mediators. However, we did not find them apt mediators (Jones et al., 2022). This may happen because technology-driven chatbots could be assumed to be effective in general. However, after a service failure, whether chatbots can help customers with service recovery is more of a trust issue. Thus, chatbots need to demonstrate […]

The study respondents also found the context of the scenario to be one representing service failure and recovery [M = 5.70; t(506), p < 0.001]. Using the Voss et al. (2003) hedonic and utilitarian scale, we found that participants considered earphones more hedonic than utilitarian.

Next, from Table 6, we can observe that the correlations of the antecedents and the mediators are positive and statistically significant, and the correlations of the mediators and the two outcome variables, consumer forgiveness and nWOM, are in the expected directions. These […]

[…] social desirability bias (De Vellis, 1991; Richins & Dawson, 1992) influenced the present study. We observed that the calculated social desirability score had weak and insignificant correlations with the study constructs.

Next, we tested the measurement model using MPLUS 8.0. The Study […]

[…] support of the first hypothesis and while the second and third hypoth[eses] […]
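The Cronbach’s alpha values reported for the scales in Section 3.4 (and tabulated in Table 3) can be reproduced from a respondents-by-items response matrix. A minimal sketch on simulated 7-point Likert data (not the study’s data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(scale total))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return float(k / (k - 1) * (1.0 - item_variances.sum() / total_variance))

# Simulated 7-point Likert responses to a 6-item scale: one shared latent
# trait per respondent plus item-level noise (illustrative data only).
rng = np.random.default_rng(0)
trait = rng.normal(loc=4.0, scale=1.0, size=(500, 1))
noise = rng.normal(loc=0.0, scale=0.8, size=(500, 6))
responses = np.clip(np.round(trait + noise), 1, 7)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Because every item shares the same latent trait, the simulated scale is highly internally consistent, in the same range as the alphas the paper reports (roughly 0.77–0.92).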
Table 4
Results of the Structural Equation Model. Each path lists the coefficient and t-value for Study 1 (N = 586) and then Study 2 (N = 508).

H1 Perceived privacy concerns → Perceived ability: −0.104*** (t = −4.72); −0.102*** (t = −4.25). Partially supported.
   Perceived privacy concerns → Perceived benevolence: −0.108 (t = −1.40); −0.105 (t = −1.24).
   Perceived privacy concerns → Perceived integrity: −0.112 (t = −1.37); −0.110 (t = −1.39).
H2 Anthropomorphism → Perceived ability: 0.146*** (t = 3.95); 0.142*** (t = 3.74). Supported.
   Anthropomorphism → Perceived benevolence: 0.157*** (t = 4.03); 0.153*** (t = 3.83).
   Anthropomorphism → Perceived integrity: 0.163*** (t = 3.98); 0.160*** (t = 3.90).
H3 Perceived empathy → Perceived ability: 0.201*** (t = 4.28); 0.195*** (t = 4.06). Supported.
   Perceived empathy → Perceived benevolence: 0.198*** (t = 4.13); 0.192*** (t = 3.92).
   Perceived empathy → Perceived integrity: 0.164*** (t = 3.90); 0.160*** (t = 3.81).
H4 Perceived ability → Consumer forgiveness: 0.085*** (t = 3.86); 0.088*** (t = 3.67). Supported.
   Perceived benevolence → Consumer forgiveness: 0.133*** (t = 4.29); 0.136*** (t = 4.12).
   Perceived integrity → Consumer forgiveness: 0.186*** (t = 4.23); 0.183*** (t = 3.98).
H5 Perceived ability → nWOM: −0.092*** (t = −4.18); −0.090*** (t = −3.91). Supported.
   Perceived benevolence → nWOM: −0.118*** (t = −3.93); −0.115*** (t = −3.97).
   Perceived integrity → nWOM: −0.223*** (t = −3.82); −0.220*** (t = −3.73).

Study 1: R² (i.e., squared multiple correlation) ranged between 0.06 and 0.62. Fit indices: Chi-square/d.f. = 2.47; RMSEA = 0.04; CFI = 0.95; TLI = 0.96.
Study 2: R² ranged between 0.09 and 0.58. Fit indices: Chi-square/d.f. = 2.83; RMSEA = 0.042; CFI = 0.955; TLI = 0.963.
Note: a t-value is significant at p < 0.05 when |t| exceeds 1.96.
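The significance rule in the table note is the standard large-sample two-tailed criterion: |t| > 1.96 corresponds to p < 0.05 under the normal approximation. A minimal stdlib Python sketch of that cutoff (the helper name `two_tailed_p` is ours, not from the paper):

```python
import math

def two_tailed_p(t):
    """Two-tailed p-value for a t statistic via the normal
    approximation (reasonable at the large df of Studies 1 and 2)."""
    phi = 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

print(round(two_tailed_p(1.96), 3))   # ~0.05: the cutoff in the table note
print(two_tailed_p(4.72) < 0.001)     # True: e.g., the H1 ability path in Study 1
print(two_tailed_p(1.40) > 0.05)      # True: the nonsignificant benevolence path
```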
Table 5
Study 1 – Indirect Effects Mediation Models. (Columns: Indirect effect 1, Indirect effect 2, Indirect effect 3.)
Finally, to test hypothesis six, we employed a strategy similar to Study 1, i.e., Hayes's (2018) mediation procedure with a bootstrapping resample value of 5000. We present the results of the mediation analyses in Table 8.

The estimated path coefficient for the indirect effect of perceived privacy concerns on consumer forgiveness through perceived ability (Column 1 of Table 8) was statistically significant (θ = −0.0089; LCI = −0.0122; UCI = −0.0056). Also, the estimated path coefficient for the indirect effect of perceived privacy concerns on nWOM through perceived ability (Column 2 of Table 8) was statistically significant (θ = 0.0092; LCI = 0.0044; UCI = 0.0140).

From Column 3 of Table 8, we observe that the estimated path coefficients for the indirect effects of chatbot anthropomorphism through perceived ability (θ = 0.0125; LCI = 0.0062; UCI = 0.0188), perceived benevolence (θ = 0.0208; LCI = 0.0124; UCI = 0.0292), and perceived integrity (θ = 0.0298; LCI = 0.0204; UCI = 0.0392) on consumer forgiveness were statistically significant. Also, in Column 4 of Table 8, we observe that the estimated path coefficients for the indirect effects of anthropomorphism through perceived ability (θ = −0.0127; LCI = −0.0178; UCI = −0.0076), perceived benevolence (θ = −0.0175; LCI = −0.0246; UCI = −0.0104), and perceived integrity (θ = −0.0352; LCI = −0.0481; UCI = −0.0223) on nWOM were statistically significant. Finally, in Column 5 of Table 8, we observe that the estimated path coefficients for the indirect effects of perceived empathy through perceived ability (θ = 0.0193; LCI = 0.0107; UCI = 0.0279), perceived benevolence (θ = 0.0261; LCI = 0.0169; UCI = 0.0353), and perceived integrity (θ = 0.0292; LCI = 0.0190; UCI = 0.0394) on consumer forgiveness were statistically significant. Similarly, in Column 6 of Table 8, we observe that the estimated path coefficients for the indirect effects of perceived empathy through perceived ability (θ = −0.0175;
Table 6
Study 2- Correlation matrix and descriptive statistics.
1 2 3 4 5 6 7 8 9 10 11 12 13
1 Consumer forgiveness 1
2 nWOM -0.41 1
3 Perceived privacy concerns -0.15 0.24 1
4 Anthropomorphism 0.25 -0.17 -0.21 1
5 Perceived empathy 0.20 -0.14 -0.10 0.16 1
6 Perceived ability 0.24 -0.23 -0.23 0.22 0.32 1
7 Perceived benevolence 0.21 -0.25 -0.09 0.25 0.26 0.39 1
8 Perceived integrity 0.27 -0.22 -0.08 0.34 0.27 0.44 0.49 1
9 Ln Age 0.17 -0.20 0.13 0.1 0.24 0.09 0.06 0.12 1
10 Gender 0.18 -0.16 0.15 0.06 0.08 0.02 0.08 0.07 0.02 1
11 Education 0.09 -0.18 0.13 0.09 0.11 0.07 0.04 0.03 0.06 0.07 1
12 Dispositional anger -0.22 0.21 0.09 0.06 0.13 -0.12 -0.07 -0.05 -0.07 -0.04 -0.13 1
13 Dispositional compassion 0.26 -0.25 0.11 0.08 0.16 0.11 0.12 0.08 0.11 0.05 0.14 -0.23 1
Mean 5.71 5.94 5.78 5.34 5.06 5.19 5.61 4.91 3.35 0.51 0.62 4.51 4.62
S.D. 1.29 1.02 1.41 0.84 1.12 0.92 1.12 1.07 2.70 0.47 0.39 1.03 1.13
***r > 0.15, p < 0.001; **r = 0.12–0.14, p < 0.01; *r = 0.088–0.11, p < 0.05; #r = 0.075–0.087, p < 0.10.
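The r-to-p thresholds in the correlation-matrix note follow from the usual test statistic for a Pearson correlation, t = r·sqrt((n−2)/(1−r²)). A small stdlib Python check against the Study 2 sample size (the function name is ours; we use a normal approximation to the t distribution, which is negligible at df = 506):

```python
import math

def corr_p(r, n):
    """Two-tailed p-value for a Pearson correlation r from n observations."""
    t = r * math.sqrt((n - 2) / (1.0 - r * r))          # t statistic, df = n - 2
    phi = 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0)))  # normal approx to t
    return 2.0 * (1.0 - phi)

# Thresholds from the Table 6 note (Study 2, n = 508)
print(corr_p(0.15, 508) < 0.001)   # True
print(corr_p(0.088, 508) < 0.05)   # True
print(corr_p(0.05, 508) > 0.10)    # True: below every starred threshold
```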
Table 7
Study 2 – Convergent and Discriminant Validity. Columns: Constructs; Cronbach's Alpha; Composite Reliability; AVE (convergent validity); and correlations with Consumer forgiveness, nWOM, Perceived privacy concerns, Anthropomorphism, Perceived empathy, Perceived ability, Perceived benevolence, and Perceived integrity (discriminant validity).
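Table 7's convergent-validity columns are conventionally computed from standardized item loadings: composite reliability CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ² / k. A minimal sketch with hypothetical loadings (the values below are illustrative, not the paper's):

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    err = sum(1.0 - lam * lam for lam in loadings)  # error variance per item
    return s * s / (s * s + err)

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    return sum(lam * lam for lam in loadings) / len(loadings)

lam = [0.82, 0.79, 0.85, 0.77]  # hypothetical four-item construct
print(round(composite_reliability(lam), 2))  # 0.88 (> 0.70 is conventionally acceptable)
print(round(ave(lam), 2))                    # 0.65 (> 0.50 supports convergent validity)
```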
Table 8
Study 2 – Indirect Effects Mediation Models. (Columns: Indirect effect 1, Indirect effect 2, Indirect effect 3.)
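The indirect effects in Tables 5 and 8 come from Hayes's (2018) percentile-bootstrap procedure with 5000 resamples; an effect is significant when the bootstrap confidence interval excludes zero. A simplified stdlib Python sketch on synthetic data (names are ours; Hayes's PROCESS models also control for the predictor in the M → Y regression, which this simple-regression version omits for brevity):

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def bootstrap_indirect(x, m, y, resamples=5000, seed=1):
    """Percentile-bootstrap 95% CI for the indirect effect a*b
    (a: X -> M slope, b: M -> Y slope)."""
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(resamples):
        idx = [rng.randrange(n) for _ in range(n)]   # resample cases with replacement
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        effects.append(slope(xs, ms) * slope(ms, ys))
    effects.sort()
    return effects[int(0.025 * resamples)], effects[int(0.975 * resamples) - 1]

# Synthetic mediation: X -> M -> Y with positive a and b paths
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
m = [0.4 * xi + rng.gauss(0, 1) for xi in x]
y = [0.5 * mi + rng.gauss(0, 1) for mi in m]
lci, uci = bootstrap_indirect(x, m, y, resamples=2000)
print(round(lci, 3), round(uci, 3))  # a CI excluding zero -> significant indirect effect
```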
LCI = −0.0269; UCI = −0.0081), perceived benevolence (θ = −0.0220; LCI = −0.0329; UCI = −0.0111), and perceived integrity (θ = −0.0352; LCI = −0.0494; UCI = −0.0210) on nWOM were statistically significant.

5.2. Discussion

Similar to Study 1, we received evidence in support of all but one hypothesis, i.e., hypothesis 1. For the first hypothesis, we received only partial support, as perceived privacy concerns influenced only the perceived ability, and not the benevolence and integrity, of the chatbot in shaping consumer forgiveness and the spread of negative word of mouth.

5.3. Robustness

As a matter of robustness, we conducted another study (n = 201; Females = 103; Median Income = £34,300; Median Age = 32.00 years) in the context of airline booking. We followed the same procedures as the main study. Further, we re-ran the same models and tested hypotheses 1–6. Although the path coefficients changed, the overall statistical significance of the results remained unchanged. In Appendix 2.1 and 2.2, we present the robustness test results.

We also conducted another study (n = 212; Females = 208; Median Income = £31,700; Median Age = 29.30 years) in the e-retailing industry where service recovery efforts failed, i.e., the chatbot could not resolve the issue. Though the path coefficients changed, the results remained statistically significant, i.e., the traits of the chatbot still influenced the trustworthiness dimensions, which in turn enhanced consumers' propensity to forgive and reduced their intention to spread nWOM. In Appendix 3.1 and 3.2, we present the robustness test results.

6. Qualitative study

For emerging topics like chatbots, where only scant literature explores their effectiveness in the service failure-recovery interface, a mixed-methods approach has the benefits of both qualitative and quantitative research designs (Venkatesh et al., 2016). Although we drew the constructs and the relationships between them from extant literature, it was vital to verify whether our constructs and empirically tested relationships corroborated consumers' perceptions of chatbots in the context of service failure and recovery (Cheng et al., 2020). Consequently, we conducted a qualitative study as well.

6.1. Data and method

We used a convenience sampling procedure to recruit interviewees through a social media platform. Users were filtered based on the following two criteria: a) they had a problem with services offered by an e-retailer in the past three months, and b) when they tried to contact the company, the issue was addressed by a chatbot. Based on these criteria, 29 social media users were found suitable for the study and invited for an interview. Nine social media users did not respond to our invitation. We interviewed 18 UK-based participants (Females = 10; Median Age = 27 years; Average interview length = 23 mins). Sample questions included: a) whether they had ever had occasion to interact with chatbots after a service was not delivered up to their expectations, b) what attributes they liked about the chatbot, and c) whether they were happy with the way the chatbot attempted to resolve the issue.

We took several measures to avert the risk of any information bias (Chenail, 2011). First, we trained the interviewers (two postgraduate researchers from a UK university) to remain acquiescent to interviewees' answers and, when they received any novel opinion, to ask new questions accordingly. Second, we promised interviewees that their responses would remain anonymous and that information would be kept confidential (Chenail, 2011). The interviewers recorded the interviews, and we employed the services of a professional transcriber to transcribe the qualitative data for further analysis. Below we discuss some excerpts of interviews that corroborated our findings on traits of chatbots resulting in consumer forgiveness and less nWOM via the three dimensions of trustworthiness.

6.2. Results

Perceived empathy (benevolence): "I warned my friend not to order anything from …store. When I did not receive my delivery though the order status showed 'Item Delivered,' I contacted the company. It was a text-based conversation with a chatbot. When I complained about the problem, I was shocked to notice there were no apologies. The chatbot's response was to check after two days. What went wrong? Why it went wrong? Is it just a technical glitch? Nothing was explained to me" (Gender = Female; Age = 32 years).

Perceived empathy (ability): "I don't think the chatbot understood my concern. The product I received was damaged. The chatbot said: 'It is a non-returnable item; check policy.' I wanted a replacement, and the chatbot did not understand my concern. I have never observed such a cold response from a chat agent. This behavior is difficult to forget and unforgivable" (Gender = Male; Age = 25 years).

Perceived empathy (integrity): "I liked the chatbot's honesty, where it acknowledged the problem and mentioned delivery driver shortage as the reason for the undue delay in my delivery, rather than repeatedly changing my delivery dates. Mistakes happen! That is ok! I encourage my friends also to order from this e-retailer" (Gender = Male; Age = 29 years).

Anthropomorphism (ability): "It was clearly a chatbot with whom I was conversing. Nevertheless, I liked the fact that it resembled a human. Though I knew it was not human, I felt confident about the bot's competency to solve my problem, just like a human agent" (Gender = Female; Age = 31 years).

Anthropomorphism (benevolence): "The chatbot looked at me during the entire chat period. I felt it was dedicated to helping me with the wrong delivery I received. I liked the compassionate behavior and decided not bad mouthing the retailer. After all, mistakes happen!" (Gender = Female; Age = 27 years).

Anthropomorphism (integrity): "When the chatbot looked at me while issuing a new delivery date, I felt the chatbot is less likely to be making false promises and is being honest. Though unsure if I would get my delivery at the revised time, I still felt like believing the chatbot and less angry after receiving the revised date" (Gender = Male; Age = 26 years).

Perceived privacy concern (benevolence): "My friend told me that chatbots analyze every text detail we write while chatting. When the chatbot asked about my concern, it appeared more of spying and less of concern for my problem; not forgiving this attitude" (Gender = Female; Age = 34 years).

Perceived privacy concern (ability): "When I conversed with the chatbot, I informed it about the product I was experiencing a problem with and wanted a refund. While issuing the refund, the chatbot asked if it should refund to my card. How did the chatbot know I paid with which card? Did it know my card number and other details? I am curious if my financial information is safe with this company. I am neither returning to the company which makes chatbots store my financial data nor encouraging my friends for the same. Anyways too many cyber threats these days!" (Gender = Male; Age = 29 years).

Perceived privacy concern (integrity): "It was strange when the chatbot asked me if I had also lost a package earlier. How is that closely related to the issue of the package I lost this time? The status shows delivered, but actually, it was not delivered. Moreover, companies do have information on such aspects in their database. Not sure if the chatbot was interrogating me or was honestly trying to help me with my lost package. I have never ordered from that retailer since this incident, and neither have my friends based on my experience" (Gender = Female; Age = 27 years).
6.3. Discussion

As can be observed from the interview excerpts, consumers emphasized how different traits of the chatbots they had previously experienced influenced their willingness to forgive the firm and not spread negative word of mouth against it. For instance, when a chatbot asked whether a customer had lost a package in the past, it made the customer perceive the chatbot as having less integrity. This happened because customers wondered how a past package delivery failure was associated with the current service failure, which made them doubt whether the chatbot was really there to help. Meanwhile, by appearing like humans, anthropomorphic chatbots increased customers' willingness to trust the chatbot when it issued a new delivery date. Thus, the qualitative findings also corroborate our empirical model: perceived empathy, anthropomorphism, and privacy concerns with chatbots influence different dimensions of their trustworthiness, which affect the willingness of consumers to forgive the firm and not spread negative word of mouth.

7. General discussion

Across two studies covering utilitarian and hedonic product categories, we hypothesized relationships between traits of chatbots (perceived privacy concerns, empathy, and anthropomorphic appearance) and chatbots' perceived trustworthiness (i.e., ability, integrity, and benevolence). We further hypothesized about the mediating role of these dimensions of trustworthiness in consumers' willingness to forgive the firm and to spread nWOM. Overall, we received evidence supporting most hypotheses, excluding H1b and H1c, i.e., the influence of perceived privacy concerns on chatbots' perceived benevolence and integrity. The results remained consistent across both utilitarian and hedonic product categories. Thus, consumers' perception of the chatbot traits required to help service recovery did not vary across product categories. Our findings regarding product categories align with extant research, where negative word of mouth did not vary across hedonic versus utilitarian product categories (Jin et al., 2023).

Our findings regarding the perceived empathy of chatbots and its influence on the perceived ability of chatbots to help with service recovery imply that even if chatbots are machines with no emotions, they can still exert similar types of social influence as humans through the choice of words and phrases they use during service recovery efforts. These word choices can make chatbots be perceived as able to solve customer problems through service recovery. Similarly, perceived empathy also increased the benevolence and integrity of empathetic chatbots.

The second subset of the third hypothesis suggested that the perceived empathy of chatbots enhances their perceived benevolence. The path coefficient was positive and statistically significant. Thus, the perceived empathy of chatbots made them appear more benevolent to customers, and the linguistic attributes of chatbots did influence the extent to which they were perceived to have honest intentions of trying to resolve customers' issues.

For anthropomorphic chatbots also, our findings suggest that a human-like appearance made consumers perceive chatbots as having greater integrity, ability, and benevolence. The findings are congruent with CASA theory, whereby humans can also perceive machines as social agents, and anthropomorphism is likely to increase this social attribute of chatbots.

The information systems and marketing fields (Luo et al., 2019; Murtarelli et al., 2021) have shown an increasing interest in exploring the effectiveness of AI-based chatbots in improving service quality. Our findings are consistent with past service marketing literature suggesting that customers are more likely to forgive a firm for service failure if the firm's perceived trustworthiness is high (Gannon et al., 2022; Xie & Peng, 2008). The present study's findings also corroborate previous findings on the significance of text messages that successfully generated customer forgiveness after a service failure. Li and Wang (2023) reported that when customer representatives conversed with customers on social media and their communication styles depicted empathy, the same generated customer forgiveness. Text-based empathy can help consumers gauge whether a service provider is concerned about their affective state of mind when customers do not receive service up to their expectations (Froehle, 2006). Our findings further suggest that if chatbots leverage the same text-based communication principles, they too, similar to humans, can successfully generate forgiveness from consumers.

Our findings do not corroborate studies in the service delivery context that reported anthropomorphic agents to have increased customers' perception of requiring extra effort to condescend with artificial agents (Ackerman, 2016). This happened because the underlying mechanism in Ackerman's (2016) study differed from the present study: the human-like appearance of AI devices increased consumers' discomfort with AI due to beliefs of humans losing their unique identity to humanoids (Ackerman, 2016; Gursoy et al., 2019). Given that consumers' response to AI agents like chatbots depends on the context in which service failure occurs, in the context of our study, consumers would prefer to perceive interacting agents as being as close to humans as possible, given human competency and ability to resolve problems (Teodorescu et al., 2021).

Furthermore, extant research in service recovery suggests that chatbot acceptance depends on the type of service failure, such as process vs. functional failure (Xing et al., 2022), and on giving customers a choice to interact with a chatbot vs. humans (Huang & Dootson, 2022). Our findings shine a light on the traits of chatbots themselves rather than other attributes of service failure. Within the role of chatbots, our findings do not agree with those of Mozafari et al. (2021), who suggested that chatbot disclosure reduced consumer satisfaction with service recovery efforts due to reduced trust. However, there the reason for reduced trust was a failure caused by the chat agent or the severe criticality of the service failure. We tested our model for service failure caused by a company, such as late delivery. During the robustness test, our model stood well when we tested the double service failure model, where the chatbot could not resolve the issue. The difference may arise because we explore trustworthiness and not trust. As explained in the theory section, the difference between the two may result in different customer outcomes.

7.1. Theoretical contributions

By exploring the human-like traits of chatbots, we contribute to the information systems and marketing literature. First, the present study, leveraging CASA theory, enhances understanding of how customers communicate with and experience quiescent agents like chatbots. In the online scenario, chatbots could emulate human behavior and persuade customers of their human-like interactivity (Blut et al., 2021). Brands can make chatbots achieve this imitation by making chatbots anthropomorphic, i.e., where customers perceive service chatbots as resembling humans. While researchers have found anthropomorphism to enhance product and brand liking in the marketing literature (Blut et al., 2021), we have limited knowledge of whether the anthropomorphism of chatbots in a service failure context can influence consumers' decision to forgive the firm for service failure once chatbots introduce recovery efforts. We thus extend the anthropomorphism literature of marketing to the information systems literature on AI-driven chatbots.

Second, we also shine a light on the limitations of CASA theory: even after perceiving chatbots as social agents owing to their anthropomorphic appearance and empathetic communication, consumers remain aware that they will lose control over information shared with chatbots, and this concern with privacy of information decreases the trustworthiness of chatbots in the context of service failure. Although extant studies suggest that anthropomorphic chatbots could reduce privacy concerns, such concerns do remain and could adversely influence service recovery efforts through chatbots (Ischen et al., 2020).

Third, we examine the mediating mechanisms between service chatbot traits and customer willingness to forgive and reduced nWOM in a service failure context. Considering mediators is significant as it assists
scholars in avoiding overestimating or underestimating the significance of technology traits (Iyer et al., 2020; Tsai et al., 2021). The literature does not explore the role of mediators in detail. Where one stream of literature in marketing explores relational mediators, such as trust and satisfaction (Verma et al., 2016), another stream considers technology attributes from the information systems literature, and some studies do not consider mediators at all (Wirtz et al., 2018). We add to these underlying mechanisms in the chatbot literature by exploring the role of trustworthiness.

Fourth, the marketing literature suggests several strategies to respond to service failure for effective customer emotion management, through actions such as quick acts by management (Tax & Brown, 1998), rendering explanation (Liao, 2007), fair reception (Maxham & Netemeyer, 2002), effective complaint management procedures (Smith et al., 1999), and empowering employees to make decisions (Tax & Brown, 1998). However, the role of AI in service failure and recovery efforts is only scantly known. For consumer outcomes also, while extant studies have explored customer coping methods following service failure (Bose & Ye, 2015; Chen et al., 2021; Duhachek, 2005; Gelbrich, 2010), consumer forgiveness as a coping strategy has not been explored in depth in service settings, especially not in the human-technology interface literature (Tsarenko & Tojib, 2011).

7.2. Implications for practice

Understanding the antecedents of chatbot trustworthiness and its influence on consumer forgiveness for service failure can help marketers and programmers adjust the design of chatbot systems. According to Invesp, a North American consulting firm specializing in conversion rate optimization, word-of-mouth marketing impacts USD 6 trillion of annual consumer spending, and thirteen percent of consumer spending decisions depend on the feedback consumers get about service providers (O'Neill, 2022). In this context, nWOM can be detrimental to the business, and service failure increases the chances of nWOM. Our model helps managers limit the chances of nWOM after the occurrence of service failure. Managers have primarily believed that consumers' willingness to forgive after service failure depends on the personality traits of the consumer, such as how empathetic they are (Wei et al., 2022) or their spirituality (Tsarenko & Tojib, 2012), among other personality traits. However, our findings suggest consumer forgiveness also depends on managers' trustworthy actions. Thus, firms using chatbots need to ensure that customers perceive these chatbots as trustworthy. Once customers consider chatbots trustworthy as they make service recovery efforts, they are less likely to spread nWOM against the company and more willing to forgive it.

Firms must design text-based chatbots that can converse effectively through text and depict empathy even through nonverbal modes. Chatbots can use expressions like "I am very sorry to know this!" Nelson Mandela once said, "If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart." Humans generally use exclamation marks to reflect empathy. If a firm intends for customers to forgive it for service failure, it must follow human-based communication principles while deploying chatbots. Thus, words like "I am sorry" or a linguistic style leveraging exclamations can make chatbots mimic human-like conversations. When chatbots, through text, signal that they are empathetic about the loss a customer incurred due to service failure, the customer's propensity to forgive the firm is likely to increase.

Our study also found that the anthropomorphic appearance of chatbots enhanced customers' likelihood of forgiving the firm and lowered the probability of spreading nWOM against the company. This happened because a human-like appearance enabled consumers to perceive chatbots as possessing human-like qualities. Although consumers' propensity to converse with human-like or machine-like agents varies with context, when service failure occurs, consumers display a proclivity to share their concerns with humans. Interacting with machine-like agents may increase skepticism among consumers regarding the ability of chatbots to resolve issues as efficiently as humans can. As our study suggests, anthropomorphism increases consumers' aptness to trust a chatbot for its service recovery efforts. Thus, firms should consider adopting anthropomorphic chatbots.

Overall, our findings affect how service providers and marketers should design customer service chatbots. The critical determinants of users' perceived trustworthiness of chatbots, and in turn of forgiveness and reduced nWOM after service failures, are the humanness of chatbots and their empathetic attitude, apart from perceived security. By exploring traits of chatbots in computer-mediated communication, we suggest that text-based chatbots must show empathy toward customers and make customers realize that the firm acknowledges the suffering or pain a customer might have gone through when service delivery failed. Similarly, chatbots should depict anthropomorphic traits with a human-like appearance. To reduce perceived risk and security concerns, firms should reduce their system vulnerabilities so that attackers cannot exploit them.

Finally, firms should try to make their data usage policies more explicit to customers, so that consumers are aware of how their data can be used and feel more comfortable sharing their personal details, purchase and payment histories, etc., in case of service failure (R. Hasan et al., 2021; M. K. Hasan et al., 2021).

7.3. Limitations and directions for future research

We did not consider any contingency conditions under which the effect of chatbot traits on trustworthiness or consumer forgiveness could be attenuated. Future studies may also benefit from cross-cultural comparisons of chatbot traits. Mechanically intelligent chatbots, such as call center agents, can provide scripted responses to simple customer issues; analytically intelligent AI-driven chatbots analyze customer problems (Huang & Rust, 2018); and intuitively intelligent AI-driven chatbots can understand customers' complaints, i.e., can understand human emotions (Huang & Rust, 2018). Thus, AI-driven chatbots are of multiple types and exhibit several aspects of human intelligence, and firms increasingly employ them in consumer service (Hwang et al., 2019). Their traits for trustworthiness may vary, and future research could explore this aspect. In the present study, we also do not effectively answer the question of what traits of AI-driven chatbots can help in service recovery after a service failure caused by AI itself (Lu et al., 2020), and research may explore this further.

We also considered a service failure scenario where service recovery efforts were successful, i.e., the chatbot could resolve the query (in robustness study 2). However, double service failure is also possible, i.e., where a chatbot cannot resolve the query (Zou & Migacz, 2022). Though we consider this scenario in a robustness test, future studies could explore the issue in detail.

In our theorization, we explain that trustworthiness rests on consumers' perception of the efforts taken by the agent to resolve the issue rather than the actual resolution. So even if service recovery efforts fail, the chatbot's trustworthiness in terms of benevolence, integrity, and ability should not change, and our robustness study reports the same. However, a detailed theoretical and empirical investigation would be beneficial.

Finally, we adopted a mixed-methods approach in the present study. However, a single case study of service failure and recovery leveraging a grounded theory approach to uncover the role of chatbots in service recovery could also be helpful, as it would give a detailed account of an individual's experience.

8. Conclusion

To conclude, as predicted by CASA theory, chatbots can be effective in conducting service recovery efforts across both utilitarian and hedonic product categories. If chatbots are anthropomorphic,
empathetic, and pose fewer privacy concerns, then these attributes make them appear trustworthy through increased perceived ability, benevolence, and integrity. The perceived trustworthiness of chatbots then increases consumers' propensity to forgive the firm for service failure and reduces their propensity to spread negative word of mouth.

[…]: Investigation, Formal analysis, Writing – review & editing, Supervision. Saurabh Bhattacharya: Methodology, Software, Validation, Data curation, Formal analysis, Visualization, Investigation, Supervision, Writing – review & editing.
H1 Perceived privacy concerns → Perceived ability: −0.121*** (t = −3.75). Partially supported.
   Perceived privacy concerns → Perceived benevolence: −0.005 (t = −1.48).
   Perceived privacy concerns → Perceived integrity: −0.106 (t = −1.24).
H2 Anthropomorphism → Perceived ability: 0.132*** (t = 3.56). Supported.
   Anthropomorphism → Perceived benevolence: 0.124*** (t = 3.95).
   Anthropomorphism → Perceived integrity: 0.141*** (t = 4.01).
H3 Perceived empathy → Perceived ability: 0.155*** (t = 3.78). Supported.
   Perceived empathy → Perceived benevolence: 0.137*** (t = 4.05).
   Perceived empathy → Perceived integrity: 0.154*** (t = 3.98).
H4 Perceived ability → Consumer forgiveness: 0.102*** (t = 3.55). Supported.
   Perceived benevolence → Consumer forgiveness: 0.102*** (t = 4.26).
   Perceived integrity → Consumer forgiveness: 0.144*** (t = 4.13).
H5 Perceived ability → nWOM: −0.083*** (t = −4.25). Supported.
   Perceived benevolence → nWOM: −0.103*** (t = −3.64).
   Perceived integrity → nWOM: −0.215*** (t = −3.59).
(continued)

Hypothesis and path                                         β           t       Result
H1  Perceived privacy concerns → Perceived ability         -0.119***   -3.99    Partially supported
    Perceived privacy concerns → Perceived benevolence     -0.018      -1.06
    Perceived privacy concerns → Perceived integrity       -0.097      -1.19
H2  Anthropomorphism → Perceived ability                    0.147***    4.04    Supported
    Anthropomorphism → Perceived benevolence                0.106***    3.71
    Anthropomorphism → Perceived integrity                  0.139***    3.59
H3  Perceived empathy → Perceived ability                   0.111***    3.54    Supported
    Perceived empathy → Perceived benevolence               0.132***    3.81
    Perceived empathy → Perceived integrity                 0.118***    3.64
H4  Perceived ability → Consumer forgiveness                0.126***    3.77    Supported
    Perceived benevolence → Consumer forgiveness            0.122***    3.71
    Perceived integrity → Consumer forgiveness              0.102***    3.62
H5  Perceived ability → nWOM                               -0.091***   -3.82    Supported
    Perceived benevolence → nWOM                           -0.075***   -3.66
    Perceived integrity → nWOM                             -0.158***   -3.94
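The support decisions in the tables above follow the conventional rule for standardized path estimates: a coefficient is flagged significant when its t-value exceeds 1.96 in absolute value (two-tailed test at α = .05), and a hypothesis is "partially supported" when only some of its component paths clear that cutoff. The sketch below illustrates this decision rule; it is not the authors' analysis code, and the coefficients are simply copied from the first rows of the table.

```python
# Illustrative sketch of the significance rule behind the "Supported" /
# "Partially supported" labels. Not the authors' analysis code; the
# (beta, t) pairs below are copied from the H1/H2 rows of the table.

paths = {
    ("privacy concerns", "ability"):     (-0.121, -3.75),
    ("privacy concerns", "benevolence"): (-0.005, -1.48),
    ("privacy concerns", "integrity"):   (-0.106, -1.24),
    ("anthropomorphism", "ability"):     ( 0.132,  3.56),
}

def significant(t, cutoff=1.96):
    """Two-tailed significance check at alpha = .05."""
    return abs(t) >= cutoff

for (x, y), (beta, t) in paths.items():
    verdict = "significant" if significant(t) else "not significant"
    print(f"{x} -> {y}: beta = {beta:+.3f}, t = {t:.2f} ({verdict})")
```

Under this rule only the privacy-concerns → ability path is significant, which is why H1 is labelled partially supported, whereas all three anthropomorphism paths clear the cutoff and H2 is fully supported.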
References

Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets, 31, 427–445. https://doi.org/10.1007/s12525-020-00414-7
Ackerman, E. (2016). Study: Nobody wants social robots that look like humans because they threaten our identity. IEEE Spectrum, 1–5.
Akter, S., D’Ambra, J., & Ray, P. (2011). Trustworthiness in mHealth information services: An assessment of a hierarchical model with mediating and moderating effects using partial least squares (PLS). Journal of the American Society for Information Science and Technology, 62(1), 100–116. https://doi.org/10.1002/asi.21442
Al-Gahtani, S. S. (2011). Modeling the electronic transactions acceptance using an extended technology acceptance model. Applied Computing and Informatics, 9(1), 47–77. https://doi.org/10.1016/j.aci.2009.04.001
Ameen, N., Cheah, J. H., & Kumar, S. (2022). It’s all part of the customer journey: The impact of augmented reality, chatbots, and social media on the body image and self-esteem of Generation Z female consumers. Psychology & Marketing, 39(11), 2110–2129. https://doi.org/10.1002/mar.21715
Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051
Asgari, P., & Roshani, K. (2013). Validation of forgiveness scale and a survey on the relationship of forgiveness and students’ mental health. International Journal of Psychology and Behavioral Research, 2(2), 109–115.
Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telematics and Informatics, 54, Article 101473. https://doi.org/10.1016/j.tele.2020.101473
Babin, B. J., Zhuang, W., & Borges, A. (2021). Managing service recovery experience: Effects of the forgiveness for older consumers. Journal of Retailing and Consumer Services, 58, Article 102222. https://doi.org/10.1016/j.jretconser.2020.102222
Baek, T. H., & Morimoto, M. (2012). Stay away from me. Journal of Advertising, 41(1), 59–76. https://doi.org/10.2753/JOA0091-3367410105
Baliga, A. J., Chawla, V., Ganesh, L. S., & Sivakumaran, B. (2021). Service failure and recovery in B2B markets – A morphological analysis. Journal of Business Research, 131, 763–781. https://doi.org/10.1016/j.jbusres.2020.09.025
Bell, C. R., & Zemke, R. E. (1987). Service breakdown: The road to recovery. Management Review, 76(10), 32–35.
Bickmore, T., & Schulman, D. (2007). Practical approaches to comforting users with relational agents. CHI’07 Extended Abstracts on Human Factors in Computing Systems, CHI EA’07 (pp. 2291–2296). New York, NY, USA: Association for Computing Machinery.
Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632–658. https://doi.org/10.1007/s11747-020-00762-y
Bose, M., & Ye, L. (2015). A cross-cultural exploration of situated learning and coping. Journal of Retailing and Consumer Services, 24, 42–50. https://doi.org/10.1016/j.jretconser.2015.01.010
Brooks, A. W., Dai, H., & Schweitzer, M. E. (2014). I’m sorry about the rain! Superfluous apologies demonstrate empathic concern and increase trust. Social Psychological and Personality Science, 5(4), 467–474. https://doi.org/10.1177/1948550613506122
Casidy, R., & Shin, H. (2015). The effects of harm directions and service recovery strategies on customer forgiveness and negative word-of-mouth intentions. Journal of Retailing and Consumer Services, 27, 103–112. https://doi.org/10.1016/j.jretconser.2015.07.012
Castillo, D., Canhoto, A. I., & Said, E. (2021). The dark side of AI-powered service interactions: Exploring the process of co-destruction from the customer perspective. The Service Industries Journal, 41(13–14), 900–925. https://doi.org/10.1080/02642069.2020.1787993
Chen, N., Mohanty, S., Jiao, J., & Fan, X. (2021). To err is human: Tolerate humans instead of machines in service failure. Journal of Retailing and Consumer Services, 59, Article 102363. https://doi.org/10.1016/j.jretconser.2020.102363
Chenail, R. J. (2011). Interviewing the investigator: Strategies for addressing instrumentation and researcher bias concerns in qualitative research. Qualitative Report, 16(1), 255–262.
Cheng, J. W., & Mitomo, H. (2017). The underlying factors of the perceived usefulness of using smart wearable devices for disaster applications. Telematics and Informatics, 34(2), 528–539. https://doi.org/10.1016/j.tele.2016.09.010
Cheng, L., Craighead, C. W., Wang, Q., & Li, J. J. (2020). When is the supplier’s message “loud and clear”? Mixed signals from supplier-induced disruptions and the response. Decision Sciences, 51(2), 216–254. https://doi.org/10.1111/deci.12412
Cheng, S. Y., White, T. B., & Chaplin, L. N. (2012). The effects of self-brand connections on responses to brand failure: A new look at the consumer–brand relationship. Journal of Consumer Psychology, 22(2), 280–288. https://doi.org/10.1016/j.jcps.2011.05.005
Cheng, Y., & Jiang, H. (2022). Customer–brand relationship in the era of artificial intelligence: Understanding the role of chatbot marketing efforts. Journal of Product & Brand Management, 31(2), 252–264. https://doi.org/10.1108/JPBM-05-2020-2907
Choi, B., & Choi, B. J. (2014). The effects of perceived service recovery justice on customer affection, loyalty, and word-of-mouth. European Journal of Marketing, 48(1/2), 108–131. https://doi.org/10.1108/EJM-06-2011-0299
Collins, C., Dennehy, D., Conboy, K., & Mikalef, P. (2021). Artificial intelligence in information systems research: A systematic literature review and research agenda. International Journal of Information Management, 60, Article 102383. https://doi.org/10.1016/j.ijinfomgt.2021.102383
Croes, E. A., & Antheunis, M. L. (2021). Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot. Journal of Social and Personal Relationships, 38(1), 279–300. https://doi.org/10.1177%2F0265407520959463
Crolic, C., Thomaz, F., Hadi, R., & Stephen, A. T. (2022). Blame the bot: Anthropomorphism and anger in customer–chatbot interactions. Journal of Marketing, 86(1), 132–148. https://doi.org/10.1177/00222429211045687
Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24(4), 349–354. https://psycnet.apa.org/doi/10.1037/h0047358
Dao, H. M., & Theotokis, A. (2021). Self-service technology recovery: The effect of recovery initiation and locus of responsibility. Journal of Interactive Marketing, 54, 25–39. https://doi.org/10.1016/j.intmar.2020.09.001
De Vellis, R. F. (1991). Scale Development: Theory and Application. Thousand Oaks, CA: SAGE.
DeWitt, T., & Brady, M. K. (2003). Rethinking service recovery strategies: The effect of rapport on consumer responses to service failure. Journal of Service Research, 6(2), 193–207. https://doi.org/10.1177%2F1094670503257048
Dong, B., Sivakumar, K., Evans, K. R., & Zou, S. (2016). Recovering coproduced service failures: Antecedents, consequences, and moderators of locus of recovery. Journal of Service Research, 19(3), 291–306. https://doi.org/10.1177%2F1094670516630624
Dörnyei, K. R., & Lunardo, R. (2021). When limited edition packages backfire: The role of emotional value, typicality and need for uniqueness. Journal of Business Research, 137, 233–243. https://doi.org/10.1016/j.jbusres.2021.08.037
Duhachek, A. (2005). Coping: A multidimensional, hierarchical framework of responses to stressful consumption episodes. Journal of Consumer Research, 32(1), 41–53. https://doi.org/10.1086/426612
Dwivedi, Y. K., & Wang, Y. (2022). Guest editorial: Artificial intelligence for B2B marketing: Challenges and opportunities. Industrial Marketing Management, 105, 109–113. https://doi.org/10.1016/j.indmarman.2022.06.001
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://psycnet.apa.org/doi/10.1037/0033-295X.114.4.864
Eren, B. A. (2021). Determinants of customer satisfaction in chatbot use: Evidence from a banking application in Turkey. International Journal of Bank Marketing, 39(2), 294–311. https://doi.org/10.1108/IJBM-02-2020-0056
Fan, H., Han, B., & Gao, W. (2022). (Im)Balanced customer-oriented behaviors and AI chatbots’ Efficiency–Flexibility performance: The moderating role of customers’ rational choices. Journal of Retailing and Consumer Services, 66, Article 102937. https://doi.org/10.1016/j.jretconser.2022.102937
Fehr, R., Gelfand, M. J., & Nag, M. (2010). The road to forgiveness: A meta-analytic synthesis of its situational and dispositional correlates. Psychological Bulletin, 136(5), 894–914. https://psycnet.apa.org/doi/10.1037/a0019993
Fetscherin, M., & Sampedro, A. (2019). Brand forgiveness. Journal of Product & Brand Management, 28(5), 633–652. https://doi.org/10.1108/JPBM-04-2018-1845
Fitzsimmons-Craft, E. E., Chan, W. W., Smith, A. C., Firebaugh, M. L., Fowler, L. A., Topooco, N., & Jacobson, N. C. (2022). Effectiveness of a chatbot for eating disorders prevention: A randomized clinical trial. International Journal of Eating Disorders, 55(3), 343–353. https://doi.org/10.1002/eat.23662
Folkman, S., & Moskowitz, J. T. (2000). Stress, positive emotion, and coping. Current Directions in Psychological Science, 9(4), 115–118. https://doi.org/10.1111/1467-8721.00073
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.1177%2F002224378101800104
Froehle, C. M. (2006). Service personnel, technology, and their interaction in influencing customer satisfaction. Decision Sciences, 37(1), 5–38. https://doi.org/10.1111/j.1540-5414.2006.00108.x
Furner, C. P., Drake, J. R., Zinko, R., & Kisling, E. (2022). Online review antecedents of trust, purchase, and recommendation intention: A simulation-based experiment for hotels and AirBnBs. Journal of Internet Commerce, 21(1), 79–103. https://doi.org/10.1080/15332861.2020.1870342
Gambetti, E., & Giusberti, F. (2009). Dispositional anger and risk decision-making. Mind & Society, 8(1), 7–20. https://doi.org/10.1007/s11299-008-0052-z
Gannon, M., Taheri, B., Thompson, J., Rahimi, R., & Okumus, B. (2022). Investigating the effects of service recovery strategies on consumer forgiveness and post-trust in the food delivery sector. International Journal of Hospitality Management, 107, Article 103341. https://doi.org/10.1016/j.ijhm.2022.103341
Gaudine, A., & Thorne, L. (2001). Emotion and ethical decision-making in organizations. Journal of Business Ethics, 31, 175–187. https://doi.org/10.1023/A:1010711413444
Gelbrich, K. (2010). Anger, frustration, and helplessness after service failure: Coping strategies and effective informational support. Journal of the Academy of Marketing Science, 38(5), 567–585. https://doi.org/10.1007/s11747-009-0169-6
Giebelhausen, M., Robinson, S. G., Sirianni, N. J., & Brady, M. K. (2014). Touch versus tech: When technology functions as a barrier or a benefit to service encounters. Journal of Marketing, 78(4), 113–124. https://doi.org/10.1509%2Fjm.13.0056
Gilchrist, K. (2017, May 9). Chatbots expected to cut business costs by $8 billion by 2022. CNBC. https://www.cnbc.com/2017/05/09/chatbots-expected-to-cut-business-costs-by-8-billion-by-2022.html
Grégoire, Y., Tripp, T. M., & Legoux, R. (2009). When customer love turns into lasting hate: The effects of relationship strength and time on customer revenge and avoidance. Journal of Marketing, 73(6), 18–32. https://doi.org/10.1509%2Fjmkg.73.6.18
Grégoire, Y., Van Vaerenbergh, Y., Orsingher, C., & Gelbrich, K. (2022). CfP JSR: Smart service failure-recovery. Journal of Service Research. https://www.servsig.org/wordpress/2022/08/cfp-jsr-smart-service-failure-recovery/
Gronroos, C. (1988). Service quality: The six criteria of good perceived service quality. Review of Business, 9(Winter), 10–13.
Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008
Hallikainen, H., Hirvonen, S., & Laukkanen, T. (2020). Perceived trustworthiness in using B2B digital services. Industrial Management & Data Systems, 120(3), 587–607. https://doi.org/10.1108/IMDS-04-2019-0212
Harrison-Walker, L. J. (2001). E-complaining: A content analysis of an Internet complaint forum. Journal of Services Marketing, 15(5), 397–412. https://doi.org/10.1108/EUM0000000005657
Harrison-Walker, L. J. (2019). The critical role of customer forgiveness in successful service recovery. Journal of Business Research, 95, 376–391. https://doi.org/10.1016/j.jbusres.2018.07.049
Hasan, M. K., Kamil, S., Shafiq, M., Yuvaraj, S., Kumar, E. S., Vincent, R., & Nafi, N. S. (2021). An improved watermarking algorithm for robustness and imperceptibility of data protection in the perception layer of internet of things. Pattern Recognition Letters, 152, 283–294. https://doi.org/10.1016/j.patrec.2021.10.032
Hasan, R., Shams, R., & Rahman, M. (2021). Consumer trust and perceived risk for voice-controlled artificial intelligence: The case of Siri. Journal of Business Research, 131, 591–597. https://doi.org/10.1016/j.jbusres.2020.12.012
Haugeland, I. K. F., Følstad, A., Taylor, C., & Bjørkli, C. A. (2022). Understanding the user experience of customer service chatbots: An experimental study of chatbot interaction design. International Journal of Human-Computer Studies, 161, Article 102788. https://doi.org/10.1016/j.ijhcs.2022.102788
Hayes, A. F. (2018). Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York, NY: Guilford Publications.
Hinds, J., Williams, E. J., & Joinson, A. N. (2020). “It wouldn’t happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal. International Journal of Human-Computer Studies, 143, Article 102498. https://doi.org/10.1016/j.ijhcs.2020.102498
Ho-Dac, N. N., Carson, S. J., & Moore, W. L. (2013). The effects of positive and negative online customer reviews: Do brand strength and category maturity matter. Journal of Marketing, 77(6), 37–53. https://doi.org/10.1509%2Fjm.11.0011
Hoffman, K. D., & Bateson, J. E. G. (1997). Essentials of Services Marketing. Fort Worth, TX: Dryden.
Hong, I. B., & Cho, H. (2011). The impact of consumer trust on attitudinal loyalty and purchase intentions in B2C e-marketplaces: Intermediary trust vs. seller trust. International Journal of Information Management, 31(5), 469–479. https://doi.org/10.1016/j.ijinfomgt.2011.02.001
Huang, D. H., & Chueh, H. E. (2021). Chatbot usage intention analysis: Veterinary consultation. Journal of Innovation & Knowledge, 6(3), 135–144. https://doi.org/10.1016/j.jik.2020.09.002
Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. https://doi.org/10.1177%2F1094670517752459
Huang, Y. S. S., & Dootson, P. (2022). Chatbots and service failure: When does it lead to customer aggression. Journal of Retailing and Consumer Services, 68, Article 103044. https://doi.org/10.1016/j.jretconser.2022.103044
Hwang, S., Kim, B., & Lee, K. (2019, July). A data-driven design framework for customer service chatbot. International Conference on Human-Computer Interaction (pp. 222–236). Cham: Springer.
Ischen, C., Araujo, T., Voorveld, H., van Noort, G., & Smit, E. (2020). Privacy concerns in chatbot interactions. Chatbot Research and Design: Third International Workshop, CONVERSATIONS 2019, Amsterdam, The Netherlands, November 19–20, 2019, Revised Selected Papers 3 (pp. 34–48). Springer International Publishing.
Iyer, G. R., Blut, M., Xiao, S. H., & Grewal, D. (2020). Impulse buying: A meta-analytic review. Journal of the Academy of Marketing Science, 48(3), 384–404. https://doi.org/10.1007/s11747-019-00670-w
Jin, Y. H., Ueltschy Murfield, M. L., & Bock, D. E. (2023). Do as you say, or I will: Retail signal congruency in buy-online-pickup-in-store and negative word-of-mouth. Journal of Business Logistics, 44(1), 37–60. https://doi.org/10.1111/jbl.12322
Johnson, D. S. (2007). Achieving customer value from electronic channels through identity commitment, calculative commitment, and trust in technology. Journal of Interactive Marketing, 21(4), 2–22. https://doi.org/10.1002/dir.20091
Joireman, J., Grégoire, Y., & Tripp, T. M. (2016). Customer forgiveness following service failures. Current Opinion in Psychology, 10, 76–82. https://doi.org/10.1016/j.copsyc.2015.11.005
Jones, C. L. E., Hancock, T., Kazandjian, B., & Voorhees, C. M. (2022). Engaging the avatar: The effects of authenticity signals during chat-based service recoveries. Journal of Business Research, 144, 703–716. https://doi.org/10.1016/j.jbusres.2022.01.012
Karakas, F., & Sarigollu, E. (2013). The role of leadership in creating virtuous and compassionate organizations: Narratives of benevolent leadership in an Anatolian tiger. Journal of Business Ethics, 113, 663–678. https://doi.org/10.1007/s10551-013-1691-5
Khamitov, M., Grégoire, Y., & Suri, A. (2020). A systematic review of brand transgression, service failure recovery and product-harm crisis: Integration and guiding insights. Journal of the Academy of Marketing Science, 48, 519–542. https://doi.org/10.1007/s11747-019-00679-1
Kim, Y., & Sundar, S. S. (2012). Anthropomorphism of computers: Is it mindful or mindless. Computers in Human Behavior, 28(1), 241–250. https://doi.org/10.1016/j.chb.2011.09.006
Knox, G., & Van Oest, R. (2014). Customer complaints and recovery effectiveness: A customer base approach. Journal of Marketing, 78, 42–57. https://doi.org/10.1509%2Fjm.12.0317
Konya-Baumbach, E., Biller, M., & von Janda, S. (2023). Someone out there? A study on the social presence of anthropomorphized chatbots. Computers in Human Behavior, 139, Article 107513. https://doi.org/10.1016/j.chb.2022.107513
Kopalle, P. K., Gangwar, M., Kaplan, A., Ramachandran, D., Reinartz, W., & Rindfleisch, A. (2022). Examining artificial intelligence (AI) technologies in marketing via a global lens: Current trends and future research opportunities. International Journal of Research in Marketing, 39(2), 522–540. https://doi.org/10.1016/j.ijresmar.2021.11.002
Kosiba, J. P. B., Boateng, H., Okoe Amartey, A. F., Boakye, R. O., & Hinson, R. (2018). Examining customer engagement and brand loyalty in retail banking: The trustworthiness influence. International Journal of Retail & Distribution Management, 46(8), 764–779. https://doi.org/10.1108/IJRDM-08-2017-0163
Lankton, N. K., McKnight, D. H., & Tripp, J. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information Systems, 16(10). https://aisel.aisnet.org/jais/vol16/iss10/1
Lauer, T., & Deng, X. (2007). Building online trust through privacy practices. International Journal of Information Security, 6, 323–331. https://doi.org/10.1007/s10207-007-0028-8
Lee, S., & Choi, J. (2017). Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity. International Journal of Human-Computer Studies, 103, 1–35. https://doi.org/10.1016/j.ijhcs.2017.02.005
Li, M., & Wang, R. (2023). Chatbots in e-commerce: The effect of chatbot language style on customers’ continuance usage intention and attitude toward brand. Journal of Retailing and Consumer Services, 71, Article 103209. https://doi.org/10.1016/j.jretconser.2022.103209
Liao, H. (2007). Do it right this time: The role of employee service recovery performance in customer-perceived justice and customer loyalty after service failures. Journal of Applied Psychology, 92(2), 475–489. https://doi.org/10.1037/0021-9010.92.2.475
Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114–121. https://psycnet.apa.org/doi/10.1037/0021-9010.86.1.114
Liu, R., Gupta, S., & Patel, P. (2021). The application of the principles of responsible AI on social media marketing for digital health. Information Systems Frontiers. Advance online publication. https://doi.org/10.1007/s10796-021-10191-z
Lu, V. N., Wirtz, J., Kunz, W. H., Paluch, S., Gruber, T., Martins, A., & Patterson, P. G. (2020). Service robots, customers and service employees: What can we learn from the academic literature and where are the gaps. Journal of Service Theory and Practice, 30(3), 361–391. https://doi.org/10.1108/JSTP-04-2019-0088
Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937–947. https://doi.org/10.1287/mksc.2019.1192
Lutz, C., & Tamó-Larrieux, A. (2020). The robot privacy paradox: Understanding how privacy concerns shape intentions to use social robots. Human-Machine Communication, 1, 87–111. https://search.informit.org/doi/10.3316/INFORMIT.097053479720281
Machneva, M., Evans, A. M., & Stavrova, O. (2022). Consensus and (lack of) accuracy in perceptions of avatar trustworthiness. Computers in Human Behavior, 126, Article 107017. https://doi.org/10.1016/j.chb.2021.107017
Malhotra, N. K., Kim, S. S., & Patil, A. (2006). Common method variance in IS research: A comparison of alternative approaches and a reanalysis of past research. Management Science, 52(12), 1865–1883. https://doi.org/10.1287/mnsc.1060.0597
Martin, K. D., Borah, A., & Palmatier, R. W. (2017). Data privacy: Effects on customer and firm performance. Journal of Marketing, 81(1), 36–58. https://doi.org/10.1509/jm.15.0497
Maxham, J. G., III, & Netemeyer, R. G. (2002). A longitudinal study of complaining customers’ evaluations of multiple service failures and recovery efforts. Journal of Marketing, 66(4), 57–71. https://doi.org/10.1509%2Fjmkg.66.4.57.18512
McCullough, M. E., Fincham, F. D., & Tsang, J. A. (2003). Forgiveness, forbearance, and time: The temporal unfolding of transgression-related interpersonal motivations. Journal of Personality and Social Psychology, 84(3), 540. https://psycnet.apa.org/doi/10.1037/0022-3514.84.3.540
McKnight, D. H. (2005). Trust in information technology. In G. B. Davis (Ed.), The Blackwell Encyclopedia of Management (Vol. 7, pp. 329–331). Maiden, MA: Blackwell.
McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2), 1–25. https://doi.org/10.1145/1985347.1985353
Migacz, S. J., Zou, S., & Petrick, J. F. (2018). The “terminal” effects of service failure on airlines: Examining service recovery with justice theory. Journal of Travel Research, 57(1), 83–98. https://psycnet.apa.org/doi/10.1177/0047287516684979
Mostafa, R. B., & Kasamani, T. (2022). Antecedents and consequences of chatbot initial trust. European Journal of Marketing, 56(6), 1748–1771. https://doi.org/10.1108/EJM-02-2020-0084
Mozafari, N., Weiger, W. H., & Hammerschmidt, M. (2021). Trust me, I’m a bot – repercussions of chatbot disclosure in different service frontline settings. Journal of Service Management, 33(2), 221–245. https://doi.org/10.1108/JOSM-10-2020-0380
Murtarelli, G., Gregory, A., & Romenti, S. (2021). A conversation-based perspective for shaping ethical human–machine interactions: The particular challenge of chatbots. Journal of Business Research, 129, 927–935. https://doi.org/10.1016/j.jbusres.2020.09.018
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Nass, C., & Steuer, J. (1993). Voices, boxes, and sources of messages: Computers and social actors. Human Communication Research, 19(4), 504–527. https://doi.org/10.1111/j.1468-2958.1993.tb00311.x
Nguyen, Q. N., Sidorova, A., & Torres, R. (2022). User interactions with chatbot interfaces vs. menu-based interfaces: An empirical study. Computers in Human Behavior, 128, Article 107093. https://doi.org/10.1016/j.chb.2021.107093
O’Neill, S. (2022, April 22). Word of mouth marketing: Stats and trends for 2023. LXA. Retrieved from https://www.lxahub.com/stories/word-of-mouth-marketing-stats-and-trends-for-2023
Park, J., & Ha, S. (2016). Co-creation of service recovery: Utilitarian and hedonic value and post-recovery responses. Journal of Retailing and Consumer Services, 28, 310–316. https://doi.org/10.1016/j.jretconser.2015.01.003
Park, N., Jang, K., Cho, S., & Choi, J. (2021). Use of offensive language in human-artificial intelligence chatbot interaction: The effects of ethical ideology, social competence, and perceived humanlikeness. Computers in Human Behavior, 121, Article 106795. https://doi.org/10.1016/j.chb.2021.106795
Pelau, C., Dabija, D. C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, Article 106855. https://doi.org/10.1016/j.chb.2021.106855
Pica, G., Jaume, L. C., & Pierro, A. (2022). Let’s go forward, I forgive you! On motivational correlates of interpersonal forgiveness. Current Psychology, 41, 6786–6794. https://doi.org/10.1007/s12144-020-01180-7
Plank, R. E., Minton, A. P., & Reid, D. A. (1996). A short measure of perceived empathy. Psychological Reports, 79(3), 1219–1226. https://doi.org/10.2466/pr0.1996.79.3f.1219
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. https://psycnet.apa.org/doi/10.1037/0021-9010.88.5.879
Press, G. (2019, November 25). AI stats news: Chatbots increase sales by 67% but 87% of consumers prefer humans. Forbes. Retrieved from https://www.forbes.com/sites/gilpress/2019/11/25/ai-stats-news-chatbots-increase-sales-by-67-but-87-of-consumers-prefer-humans/?sh=2eee137948a3
Przegalinska, A., Ciechanowski, L., Stroz, A., Gloor, P., & Mazurek, G. (2019). In bot we trust: A new methodology of chatbot performance measures. Business Horizons, 62(6), 785–797. https://doi.org/10.1016/j.bushor.2019.08.005
Qiu, L., & Benbasat, I. (2010). A study of demographic embodiments of product recommendation agents in electronic commerce. International Journal of Human-Computer Studies, 68(10), 669–688. https://doi.org/10.1016/j.ijhcs.2010.05.005
Rabbani, M. R. (2022). Fintech innovations, scope, challenges, and implications in Islamic Finance: A systematic analysis. International Journal of Computing and Digital
Singh, J., & Crisafulli, B. (2015). Customer responses to service failure and recovery experiences. In S. Sahadev, K. Purani, & N. Malhotra (Eds.), Boundary Spanning Elements and the Marketing Function in Organizations (pp. 117–135). Cham: Springer. https://link.springer.com/chapter/10.1007/978-3-319-13440-6_8
Sinha, J., & Lu, F. C. (2016). “I” value justice, but “we” value relationships: Self-construal effects on post-transgression consumer forgiveness. Journal of Consumer Psychology, 26(2), 265–274. https://doi.org/10.1016/j.jcps.2015.06.002
Smith, A. K., Bolton, R. N., & Wagner, J. (1999). A model of customer satisfaction with service encounters involving failure and recovery. Journal of Marketing Research, 36(3), 356–372. https://doi.org/10.1177%2F002224379903600305
Song, Y., Luximon, A., & Luximon, Y. (2021). The effect of facial features on facial anthropomorphic trustworthiness in social robots. Applied Ergonomics, 94, Article 103420. https://doi.org/10.1016/j.apergo.2021.103420
Stiff, J. B., Dillard, J. P., Somera, L., Kim, H., & Sleight, C. (1988). Empathy, communication, and prosocial behavior. Communication Monographs, 55(2), 198–213. https://doi.org/10.1080/03637758809376171
Suhaili, S. M., Salim, N., & Jambli, M. N. (2021). Service chatbots: A systematic review. Expert Systems with Applications, 184, Article 115461. https://doi.org/10.1016/j.eswa.2021.115461
Talwar, S., Dhir, A., Scuotto, V., & Kaur, P. (2021). Barriers and paradoxical recommendation behaviour in online to offline (O2O) services. A convergent mixed-method study. Journal of Business Research, 131, 25–39. https://doi.org/10.1016/j.jbusres.2021.03.049
Tax, S. S., & Brown, S. W. (1998). Recovering and learning from service failure. MIT Sloan Management Review, 40(1), 75–88. https://sloanreview.mit.edu/article/recovering-and-learning-from-service-failure/
Teodorescu, M. H., Morse, L., Awwad, Y., & Kane, G. C. (2021). Failures of fairness in automation require a deeper understanding of human-ML augmentation. MIS Quarterly, 45(3). https://doi.org/10.25300/MISQ/2021/16535
Tsai, C. T., & Su, C. S. (2009). Service failures and recovery strategies of chain restaurants in Taiwan. The Service Industries Journal, 29(12), 1779–1796. https://doi.org/10.1080/02642060902793599
Tsai, W. H. S., Liu, Y., & Chuan, C. H. (2021). How chatbots’ social presence communication enhances consumer engagement: The mediating role of parasocial interaction and dialogue. Journal of Research in Interactive Marketing, 15(3), 460–482. https://doi.org/10.1108/JRIM-12-2019-0200
Tsarenko, Y., & Tojib, D. R. (2011). A transactional model of forgiveness in the service
Systems, 11(1), 1–28. failure context: A customer-driven approach. Journal of Services Marketing, 25(5),
Rajaobelina, L., Prom Tep, S., Arcand, M., & Ricard, L. (2021). Creepiness: Its 381–392. https://doi.org/10.1108/08876041111149739
antecedents and impact on loyalty when interacting with a chatbot. Psychology & Tsarenko, Y., & Tojib, D. (2012). The role of personality characteristics and service
Marketing, 38(12), 2339–2356. https://doi.org/10.1002/mar.21548 failure severity in consumer forgiveness and service outcomes. Journal of Marketing
Ramesh, A., & Chawla, V. (2022). Chatbots in marketing: A literature review using Management, 28(9–10), 1217–1239. https://doi.org/10.1080/
morphological and co-occurrence analyses. Journal of Interactive Marketing, 57(3). 0267257X.2011.619150
https://doi.org/10.1177/10949968221095549 Vázquez-Casielles, R., del Río-Lanza, A. B., & Díaz-Martín, A. M. (2007). Quality of past
Rapp, A., Curti, L., & Boldi, A. (2021). The human side of human-chatbot interaction: A performance: Impact on consumers’ responses to service failure. Marketing Letters, 18
systematic literature review of ten years of research on text-based chatbots. (4), 249–264. https://doi.org/10.1007/s11002-007-9018-x
International Journal of Human-Computer Studies, 151, Article 102630. https://doi. Venkatesh, V., Brown, S. A., & Sullivan, Y. (2016). Guidelines for conducting mixed-
org/10.1016/j.ijhcs.2021.102630 methods research: An extension and illustration. Venkatesh, V., Brown, SA, and
Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, Sullivan, YW’Guidelines for Conducting Mixed-methods Research: An Extension and
and new media like real people. New York, NR: Cambridge University Press. Illustration,’Journal of the AIS, 17(7), 435–495. https://ssrn.com/abstract=3958485.
Richins, M. L., & Dawson, S. (1992). A consumer values orientation for materialism and Verhagen, T., Nauta, A., & Feldberg, F. (2013). Negative online word-of-mouth:
its measurement: Scale development and validation. Journal of Consumer Research, Behavioral indicator or emotional release. Computers in Human Behavior, 29(4),
19(3), 303–316. https://doi.org/10.1086/209304 1430–1440. https://doi.org/10.1016/j.chb.2013.01.043
Riyanto, Y. E., & Jonathan, Y. X. (2018). Directed trust and trustworthiness in a social Verma, V., Sharma, D., & Sheth, J. (2016). Does relationship marketing matter in online
network: An experimental investigation. Journal of Economic Behavior & retailing? A meta-analytic approach. Journal of the Academy of Marketing Science, 44
Organization, 151, 234–253. https://doi.org/10.1016/j.jebo.2018.04.005 (2), 206–217. https://doi.org/10.1007/s11747-015-0429-6
Roggeveen, A. L., Tsiros, M., & Grewal, D. (2012). Understanding the co-creation effect: Voss, K. E., Spangenberg, E. R., & Grohmann, B. (2003). Measuring the hedonic and
When does collaborating with customers provide a lift to service recovery. Journal of utilitarian dimensions of consumer attitude. Journal of Marketing Research, 40(3),
the Academy of Marketing Science, 40, 771–790. https://doi.org/10.1007/s11747- 310–320. https://doi.org/10.1509/jmkr.40.3.310.19238
011-0274-1 Wang, W., & Benbasat, I. (2007). Recommendation agents for electronic commerce:
Roy, R., & Ng, S. (2012). Regulatory focus and preference reversal between hedonic and Effects of explanation facilities on trusting beliefs. Journal of Management Information
utilitarian consumption. Journal of Consumer Behaviour, 11(1), 81–88. https://doi. Systems, 23(4), 217–246. https://doi.org/10.2753/MIS0742-1222230410
org/10.1002/cb.371 Wang, X., Lin, X., & Shao, B. (2022). How does artificial intelligence create business
Ruan, Y., & Mezei, J. (2022). When do AI chatbots lead to higher customer satisfaction agility? Evidence from chatbots. International Journal of Information Management, 66,
than human frontline employees in online shopping assistance? Considering product Article 102535. https://doi.org/10.1016/j.ijinfomgt.2022.102535
attribute type. Journal of Retailing and Consumer Services, 68, Article 103059. https:// Wang, Y., Zhang, M., Li, S., McLeay, F., & Gupta, S. (2021). Corporate responses to the
doi.org/10.1016/j.jretconser.2022.103059 coronavirus crisis and their impact on electronic-word-of-mouth and trust recovery:
Rye, M. S., Loiacono, D. M., Folck, C. D., Olszewski, B. T., Heim, T. A., & Madia, B. P. Evidence from social media. British Journal of Management, 32(4), 1184–1202.
(2001). Evaluation of the psychometric properties of two forgiveness scales. Current https://doi.org/10.1111/1467-8551.12497
Psychology, 20(3), 260–277. https://doi.org/10.1007/s12144-001-1011-6 Wangenheim, F. V. (2005). Postswitching negative word of mouth. Journal of Service
Schiemann, S. J., Mühlberger, C., Schoorman, F. D., & Jonas, E. (2019). Trust me, I am a Research, 8(1), 67–78. https://doi.org/10.1177%2F1094670505276684.
caring coach: The benefits of establishing trustworthiness during coaching by Wei, C., Liu, M. W., & Keh, H. T. (2020). The road to consumer forgiveness is paved with
communicating benevolence. Journal of Trust Research, 9(2), 164–184. https://doi. money or apology? The roles of empathy and power in service recovery. Journal of
org/10.1080/21515581.2019.1650751 Business Research, 118, 321–334. https://doi.org/10.1016/j.jbusres.2020.06.061
Schuetzler, R. M., Grimes, G. M., Giboney, J. S., & Rosser, H. K. (2021). Deciding whether Wei, J., Wang, Z., Hou, Z., & Meng, Y. (2022). The influence of empathy and consumer
and how to deploy chatbots. MIS Quarterly Executive, 20(1), 1–15. https://doi.org/ forgiveness on the service recovery effect of online shopping. Frontiers in Psychology,
10.17705/2msqe.00039 13, Article 842207. https://doi.org/10.3389%2Ffpsyg.2022.842207.
Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A.
Anthropomorphism and adoption. Journal of Business Research, 115, 14–24. https:// (2018). Brave new world: Service robots in the frontline. Journal of Service
doi.org/10.1016/j.jbusres.2020.04.030 Management, 29(5), 907–931. https://doi.org/10.1108/JOSM-04-2018-0119
Shiota, M. N., Keltner, D., & John, O. P. (2006). Positive emotion dispositions Wu, K. W., Huang, S. Y., Yen, D. C., & Popova, I. (2012). The effect of online privacy
differentially associated with Big Five personality and attachment style. The Journal policy on consumer privacy concern and trust. Computers in Human Behavior, 28(3),
of Positive Psychology, 1(2), 61–71. https://doi.org/10.1080/17439760500510833 889–897. https://doi.org/10.1016/j.chb.2011.12.008
Shukairy, A. (May 4, 2018). Chatbots in customer service – Statistics and trends Xie, Y., & Peng, S. (2009). How to repair customer trust after negative publicity: The
[infographic]. Invespcro. Retrieved from https://www.invespcro.com/blog/chatbot roles of competence, integrity, benevolence, and forgiveness. Psychology &
s-customer-service/. Marketing, 26(7), 572–589. https://doi.org/10.1002/mar.20289
Xing, X., Song, M., Duan, Y., & Mou, J. (2022). Effects of different service failure types and recovery strategies on the consumer response mechanism of chatbots. Technology in Society, 70, Article 102049. https://doi.org/10.1016/j.techsoc.2022.102049
Yan, Y., Gupta, S., Licsandru, T. C., & Schoefer, K. (2022). Integrating machine learning, modularity and supply chain integration for Branding 4.0. Industrial Marketing Management, 104, 136–149. https://doi.org/10.1016/j.indmarman.2022.04.013
Zafar, A. U., Qiu, J., Li, Y., Wang, J., & Shahzad, M. (2021). The impact of social media celebrities’ posts and contextual interactions on impulse buying in social commerce. Computers in Human Behavior, 115, Article 106178. https://doi.org/10.1016/j.chb.2019.106178
Zhang, T., Tao, D., Qu, X., Zhang, X., Lin, R., & Zhang, W. (2019). The roles of initial trust and perceived risk in public’s acceptance of automated vehicles. Transportation Research Part C: Emerging Technologies, 98, 207–220. https://doi.org/10.1016/j.trc.2018.11.018
Zou, S., & Migacz, S. J. (2022). Why service recovery fails? Examining the roles of restaurant type and failure severity in double deviation with justice theory. Cornell Hospitality Quarterly, 63(2), 169–181. https://doi.org/10.1177/1938965520967921