Article:
Chua, Alton Y.K., Pal, Anjan orcid.org/0000-0001-7203-7126 and Banerjee, Snehasish
orcid.org/0000-0001-6355-0470 (2023) AI-enabled investment advice: Will users buy it?
Computers in Human Behavior. 107481. ISSN 0747-5632
https://doi.org/10.1016/j.chb.2022.107481
Reuse
This article is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs
(CC BY-NC-ND) licence. This licence only allows you to download this work and share it with others as long
as you credit the authors, but you can’t change the article in any way or use it commercially. More
information and the full terms of the licence here: https://creativecommons.org/licenses/
AI-enabled investment advice: Will users buy it?
Abstract
adoption, Trust.
1. Introduction
The diffusion of artificial intelligence (AI) technologies into our daily lives has picked
up considerable momentum in recent years (Gursoy et al., 2019). The global AI market size,
which was valued at US$27.23 billion in 2019, is expected to reach a staggering US$267
billion by 2027 (Fortune Business Insights, 2020). From self-driving cars to voice-activated
home assistance devices, AI has effectively taken over routine tasks that were previously
done by humans (Bickmore, 2018; Chong et al., 2022; Liu & Tao, 2022; Sloane & Silva,
2020).
To ease decision-making, AI solutions are now available not only for low-stake
activities such as personalized shopping (Ashoori & Weisz, 2019) and news recommendation
(Diakopoulos & Koliska, 2017) but also for situations when choices are highly consequential
as in cancer screening (Jha & Topol, 2016) and prison sentencing (Ashoori & Weisz, 2019).
Yet, public opinion on the general outlook of AI remains divided. Some envision a rose-
tinted future while others see a calamitous apocalypse (Markoff, 2016; Tegmark, 2017; Wu et
al., 2020). Evidence that people buy into machine-generated advice has been mixed (Bigman
& Gray, 2018; Dietvorst et al., 2015; Wickramasinghe et al., 2020). This paper is therefore
motivated by the limited understanding of the conditions under which user acceptance of AI
can be influenced.
Even as research on human-AI interaction continues to gain traction, two gaps could
be identified. First, the underlying psychological mechanism of how users decide to accept
AI-enabled advice is not yet well understood. In tandem with the launch of new AI
recommendation systems, there have been calls for research to better explain humans’
algorithmic reliance (Kleinberg et al., 2018; Logg, 2017). Second, the literature is silent on
the way the level of risk alters the behavioral intention to accept AI-enabled advice (Bao et
al., 2022). Any decision entails some degree of risk, especially if it has to be made in high
involvement contexts such as healthcare and finance where human counsel is usually
preferred to machine-generated advice (Longoni et al., 2019; Zhang et al., 2021). Hence, the
question of how AI uptake can be promoted in such high involvement services remains open.
To address the first research gap, this paper builds on the literature on user behavior
toward technology. From early works (e.g., Ajzen, 1985; Ajzen & Fishbein, 1980; Venkatesh
et al., 2003) to contemporary studies (e.g., Dwivedi et al., 2019), attitude has been shown to
be a key predictor of users' behavioral intention to adopt technology and to accept its
recommendations. With this as the starting point, this paper further argues that attitude could
also be positively associated with trust (Cheng et al., 2019; Chong et al., 2022; Ho et al.,
2017) and perceived accuracy (Jacobsen et al., 2020; Schaffer et al., 2015), especially when
AI is intended to make estimates and forecasts. Trust and perceived accuracy are important to
study given the growing concern about the extent to which black-box AI algorithms promote the
core values of credence, fairness and usefulness (Araujo et al., 2020; Liang et al., 2021;
Ochmann et al., 2021). In this paper, trust refers to users’ willingness to depend on AI for
decision-making based on gut-feeling (Ferrario et al., 2019; Komiak & Benbasat, 2006) while
perceived accuracy is the perception of the extent to which AI-generated advice reflects
the ideal recommendation free of human biases and errors (Smith & Mentzer, 2010).
Additionally, to address the second research gap, this paper considers the role of risk
associated with financial investment. In particular, stock market investment was used as the
context of investigation because there is currently keen research and practical interest in
applying AI in capital markets (Ho et al., 2017; Sun, 2020). Moreover, the volatility of the
stock market lends itself readily to the study of risk, which involves unforeseen contingencies
(Ho et al., 2017; Pavlou & Fygenson, 2006; Schwert, 1989). Depending on the level of risk,
the readiness to buy into AI’s advice could change. However, the literature remains largely
silent on how risk level interacts with attitude, trust and perceived accuracy in shaping users'
behavioral intention to accept AI-based recommendations.
For these reasons, the objective of this paper is to develop and empirically validate a
conceptual model that explains the behavioral intention to accept AI-based recommendations
as a function of attitude toward AI, trust, perceived accuracy and risk level. The proposed
model was tested through an experiment using a simulated AI-enabled
investment recommendation system. A total of 368 participants were randomly and evenly
assigned to two experimental conditions that manipulated the level of investment risk.
Although prior research has consistently found
attitude to be a strong predictor of behavioral intention (Gool et al., 2015; Pember et al.,
2018; Sanakulov & Karjaluoto, 2015; Sanne & Wiese, 2018), this paper takes the relationship
as the point of departure and unpacks it to offer deeper insights. Specifically, it proposes an
attitude-perception-intention (API) model of the behavioral intention to accept recommendations
generated by AI. In so doing, the paper contributes to the growing body of literature on human-AI
interaction. On the practical front, the findings shed light on the conditions in which AI
acceptance could be enhanced. This can be useful for policy-makers and practitioners who
design interventions to promote society’s behavioral intention to rely on AI. In turn, it can
pave the way for the successful commercialization of new AI systems in high consumer
involvement services.
The rest of the paper is organized as follows. Section 2 presents the literature
review and hypotheses development. Section 3 describes the research design and explains
how data were collected and analyzed. Section 4 and Section 5 present the results and the
discussion respectively. Section 6 concludes with theoretical and practical implications of the
paper, as well as acknowledges the limitations and offers possible research directions.
2. Literature review and hypotheses development
The literature on human-AI interaction points to two contrasting user tendencies: automation
bias and AI aversion (Bigman & Gray, 2018; Chong et al., 2022; Dietvorst et al., 2015;
Tomsett et al., 2020; Wickramasinghe et al., 2020). Automation bias occurs when users
readily buy into computer recommendations instead of relying on their own judgment. At the
other end of the spectrum, AI aversion is exhibited when users reject algorithm-generated
advice (Tomsett et al., 2020). Automation bias and AI aversion tendencies could be shaped
by a variety of factors including cognitive load (Parasuraman & Manzey, 2010), accountability
in the decision process (Cummings, 2006), and individuals’ level of expertise and training
(Rzepka & Berger, 2018). For example, findings suggest that the more transparent the
reasoning process of the AI system, the more favorable users will judge its decision quality
(Xu et al., 2014), and hence embrace its recommendation. On the other hand, an overly
autonomous AI system that displays a high degree of humanness can threaten, and thus repel
users (Złotowski et al., 2017). Next, the fit between users’ cognitive model and the system
presentation also influences acceptance (Shmueli et al. 2016). In the same way, users’
demographics such as gender and ethnicity that are congruous to system characteristics such
as avatar appearances can lead to positive system perception (Qiu & Benbasat, 2010). On the
context of use, automation bias is more likely to occur for functional tasks that call for logic
whereas AI aversion is triggered in situations that involve making intuitive and emotional
System characteristics are not investigated in this paper given that AI systems for
investment are typically opaque to protect their commercial advantage and proprietary rights
(Rudin et al., 2018). User characteristics such as gender, age and investment self-efficacy
(Montford & Goldsmith, 2016) are statistically controlled in testing the hypotheses, which are
proposed subsequently for the development of the conceptual model shown in Figure 1. Risk
level is incorporated in the model as a moderating variable.
Attitude toward any object refers to the mindset of an individual formed by prior
knowledge and experience. It turns into a predisposition for how the individual will value the
object subsequently (Persson et al., 2021). For the purpose of this paper, attitude toward AI
refers to the degree to which one views AI favorably (Lichtenthaler, 2019; Ochmann et al.,
2021). In reality, this attitude varies drastically with ebbs and flows of technological
breakthroughs (Markoff, 2016; Tegmark, 2017; Wu et al., 2020). While some consider AI to
have a positive impact on their everyday lives, others fear that it will result in a loss of their
jobs (Bigman & Gray, 2018; Dietvorst et al., 2015; Tegmark, 2017; Wickramasinghe et al.,
2020).
Prior works have consistently found attitude to be one of the key predictors of
behavioral intention (Pember et al., 2018; Sanakulov & Karjaluoto, 2015; Sanne & Wiese,
2018). This stems from the intrinsic motivation to maintain consistency between attitudes and
behaviors (Gool et al., 2015). Hence, attitude toward AI could potentially shape users’
inclination to accept the usage of AI in everyday life (Lichtenthaler, 2019; Persson et al.,
2021). Those with a favorable attitude toward AI could be more willing to accept AI-based
recommendation in the context of stock market investment than those who view AI with
disfavor. Thus, the following hypothesis is proposed:
H1: Attitude toward AI is positively associated with the behavioral intention to accept AI-based
recommendation.
For the purpose of this paper, trust refers to users’ willingness to depend on AI for
decision-making based on gut-feeling (Ferrario et al., 2019; Komiak & Benbasat, 2006), and
perceived accuracy is defined as the perception of the extent to which AI-generated advice
reflects the ideal recommendation free of human biases and errors (Smith & Mentzer, 2010).
Trust and perceived accuracy are important constructs when it comes to stock market
investment. After all, when AI is intended to make predictions, the behavioral intention to
accept its advice could hinge on trust in AI (Cheng et
al., 2019; Chong et al., 2022; Ho et al., 2017) and perceived accuracy of AI (Jacobsen et al.,
2020; Schaffer et al., 2015). The dependence on trust and perceived accuracy could be further
heightened due to the opaque nature of typical investment-related AI systems (Araujo et al.,
2020; Liang et al., 2021; Ochmann et al., 2021).
Given the volatility of the stock market, investors sometimes contend with regret
aversion, which refers to the fear of choosing an option that could turn out to be a bad one
(Berkelaar et al., 2004; Chang et al., 2008; Noah & Lingga, 2020). This leads either to a
preference for inaction (Sautua, 2017) or making the choice more conscientiously to
inoculate against self-blame (Reb, 2008). However, there is scant research hitherto on how
this dilemma plays out when the burden of decision-making is shifted from the self to AI. Greater
vigilance in decision-making could cause investors to either maintain the status quo and
ignore machine-generated advice, or buy into the recommendations if they consider the
advice trustworthy and accurate.
Prior research shows that attitude-induced trust promotes behavioral outcomes (Ho et
al., 2017; Nguyen et al., 2019). In a similar way, perceived accuracy, which is related
positively to attitude, could also motivate behavioral intention (Nourani et al., 2019).
Therefore, while a favorable attitude toward AI seems to be positively associated with trust
and perceived accuracy, the opposite can be expected with an unfavorable attitude. Moreover,
greater levels of trust and accuracy seem likely to result in higher behavioral intention to
accept AI-based recommendation and vice-versa (Cheng et al., 2019; Ho et al., 2017;
Jacobsen et al., 2020; Schaffer et al., 2015). Thus, the following hypotheses are proposed:
H2: Attitude toward AI is positively associated with trust in AI.
H3: Attitude toward AI is positively associated with perceived accuracy of AI.
H4: Trust in AI is positively associated with the behavioral intention to accept AI-based
recommendation.
H5: Perceived accuracy of AI is positively associated with the behavioral intention to accept AI-
based recommendation.
All investments carry some level of risk. For the purpose of this paper, risk is
conceptualized as volatility which refers to how much the price of a stock fluctuates within a
short timeframe (Schwert, 1989). Investing in blue-chip stocks which are associated with
well-established and financially stable companies is regarded as low risk. Not easily subject
to market speculation, the magnitude of their potential upside and downside is muted in the
short term. On the other hand, investing in penny stocks is regarded as high risk because of
their susceptibility to speculation and sharp price swings within short periods.
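For illustration, the sketch below (not taken from the study; the price series and parameters are simulated placeholders) shows one common way to quantify the short-term volatility on which this conceptualization of risk rests, namely the standard deviation of daily log returns.

import numpy as np

def short_term_volatility(prices: np.ndarray) -> float:
    # Standard deviation of daily log returns over the window.
    log_returns = np.diff(np.log(prices))
    return float(log_returns.std(ddof=1))

# Simulated 60-day price paths: a placid blue-chip-like series and a choppy
# penny-stock-like series (parameters are arbitrary placeholders).
rng = np.random.default_rng(1)
blue_chip = 100.0 * np.cumprod(1 + rng.normal(0.0003, 0.008, 60))
penny_stock = 0.50 * np.cumprod(1 + rng.normal(0.0003, 0.060, 60))

print(short_term_volatility(blue_chip))    # low volatility -> low-risk stimulus
print(short_term_volatility(penny_stock))  # high volatility -> high-risk stimulus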
Literature on risk taking suggests that the intention to perform a behavior depends on the
level of risk involved (Cullen & Gordon, 2007; Sitkin & Pablo, 1992; Sitkin & Weingart,
1995). Investors’ willingness to go for high-risk or low-risk stocks depends on factors such as
investment self-efficacy and the perception of the likelihood of loss (Jasiniak, 2018;
Montford & Goldsmith, 2016). However, there is a dearth of studies on how individuals
respond to AI-based recommendations under different levels of risk. The conservation of
resources theory has been widely applied to understand how individuals navigate their way through challenging
circumstances (Hobfoll, 1989; 2011). Under stress, the threat of resource loss is viewed more
saliently than the hope of resource gain. Hence, the instinct is to invest resources just to
protect against resource loss. Applying the theory in the context of investment, this means
that the attendant stress of a high-risk situation involving penny stocks may compel
individuals to be more vigilant. Even with a favorable attitude toward AI, they would still
make a careful assessment of their trust in AI and perceived accuracy of AI before deciding
whether to accept its recommendation. In contrast, individuals in a low-risk
situation involving blue-chip stocks would be less dictated by loss aversion tendencies. As
long as they hold a favorable attitude toward AI, they would be willing to accept the
recommendation.
For these reasons, risk level is expected to play a moderating role among attitude,
trust and perceived accuracy in their relationships with intention. In particular, the heightened
vigilance triggered under a high-risk investment situation could lead to stronger relations
between trust and intention as well as perceived accuracy and intention. This has the
inadvertent effect of weakening the relationship between attitude and intention. In other
words, the following hypotheses are proposed:
H6(a): Risk level moderates the relation between attitude toward AI and behavioral
intention to accept AI-based recommendation. The relation is stronger in the low-risk
condition than in the high-risk condition.
H6(b): Risk level moderates the relation between trust in AI and behavioral intention
to accept AI-based recommendation. The relation is stronger in the high-risk condition than
in the low-risk condition.
H6(c): Risk level moderates the relation between perceived accuracy of AI and
behavioral intention to accept AI-based recommendation. The relation is stronger in the high-
risk condition than in the low-risk condition.
3. Methods
An experiment was conducted to test the hypotheses in the proposed API model of AI
acceptance. Two experimental conditions were set up to manipulate the level of risk. One
induced low-risk investment with recommendations for blue-chip stocks while the other
induced high-risk investment with recommendations for penny stocks.
Prior to the experiment, a pilot study was conducted for the purpose of manipulation
check. A total of 10 participants selected using convenience sampling were asked to rate the
level of risk associated with the two scenarios (Figure 2 and Figure 3) as either high or low.
There was unanimous agreement that both experimental conditions reflected their intended
risk levels.
Participants for the main experiment were then recruited. The inclusion criterion was that they
must have had prior experience with stock market investment. Data collection proceeded through
the following two steps. First, participants answered a screening question to confirm that
they had previously invested in the stock market. They also completed a short questionnaire
to provide demographic data and indicate their investment self-efficacy. Thereafter, they
were asked to imagine they were investors looking to increase their portfolio and were
introduced to a simulated AI-enabled investment recommendation system.
Participants were told that it uses a proprietary AI algorithm that learns from stocks’
fundamentals, price and volume history to provide unbiased advice to investors. The system
then presents its advice in the form of a recommendation for a specific stock.
In the second step, participants were randomly and evenly assigned to one of the two
experimental conditions. They were given a brief description of the blue-chip or
penny stocks provided. This was to ensure they understood the level of risk the stock carried.
Figure 2 shows a BUY recommendation for a blue-chip stock while Figure 3
shows a BUY recommendation for a penny stock. After that, participants were asked to
indicate their intentions to accept AI-based recommendation. Finally, they responded to a set
of questionnaire items measuring their trust in AI, perceived accuracy of AI, and attitude
toward AI. All items followed a seven-point Likert scale (1=strongly disagree, 7=strongly
agree).
Figure 2: Experimental stimulus depicting low-risk investment recommendation.
Figure 3: Experimental stimulus depicting high-risk investment recommendation.
3.3. Measures
Participants’ gender, age, and investment self-efficacy were used as control variables
in all analyses. Gender was captured as either male or female. Age was captured in years.
Investment self-efficacy was measured using items adapted from Montford and Goldsmith
(2016). The final dependent variable in the API model shown in Figure 1 is behavioral
intention to accept AI-based recommendation. This was measured using items adapted from
Gursoy et al. (2019). Attitude toward AI is the independent variable in the conceptual model.
It was measured using items adapted from Belanche et al. (2019). The other two variables in
the model include trust in AI and perceived accuracy of AI. These were measured using items
adapted from Jamaludin and Ahmad (2013) and Gursoy et al. (2019) respectively. The items
and their loadings are reported in Appendix A.
3.4. Data analysis
Data were analyzed using partial least squares structural equation modeling (PLS-
SEM). To ensure reliability of the measures, Cronbach’s alpha and composite reliability were
used. Validity was checked in terms of convergent validity and discriminant validity.
Common method bias was tested using Harman’s one-factor test. It included all items in a
principal component factor analysis (Podsakoff & Organ, 1986; Shiau & Luo, 2012). More
than one factor emerged, indicating that common method bias was not a problem. The
assessment of the structural model included the coefficient of determination (R2), the
statistical significance of the path coefficients, and the cross-validated redundancy measure (Q2).
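As an illustrative aid (a minimal sketch, not the authors' analysis scripts; the item responses and loadings below are simulated placeholders), the reliability and validity statistics reported in Section 4, together with Harman's one-factor check, can be computed as follows:

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the item sum).
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
    errors = 1 - loadings ** 2
    return float(loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum()))

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of squared standardized loadings.
    return float(np.mean(loadings ** 2))

def harman_one_factor_share(all_items: pd.DataFrame) -> float:
    # Harman's one-factor test: share of variance captured by the first
    # unrotated principal component of all items pooled together.
    corr = np.corrcoef(all_items.values, rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return float(eigenvalues[0] / eigenvalues.sum())

# Simulated seven-point Likert responses for three hypothetical trust items.
rng = np.random.default_rng(0)
trust_items = pd.DataFrame(rng.integers(1, 8, size=(368, 3)),
                           columns=["trust_1", "trust_2", "trust_3"])

print(cronbach_alpha(trust_items))                               # threshold: > 0.7
print(composite_reliability(np.array([0.84, 0.88, 0.82])))       # threshold: > 0.7
print(average_variance_extracted(np.array([0.84, 0.88, 0.82])))  # threshold: > 0.5
print(harman_one_factor_share(trust_items))                      # < 0.5 suggests no single dominant factor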
To examine the moderating effect of risk level, a multi-group analysis was conducted
to compare data from the two experimental conditions of low-risk and high-risk investment. The
loadings between the latent variables and their indicators were similar for both the groups,
allowing for a meaningful cross-group analysis. Thereafter, the group comparison method
was applied to identify if the standardized path coefficients for the two groups of participants
differed significantly (Keil et al., 2000). The roles of gender, age and investment self-efficacy
were controlled in all the PLS-SEM analyses. In particular, the three control variables were
added by connecting them to the main endogenous variable (users’ intention to accept AI-
based recommendation).
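For readers unfamiliar with the group comparison method, the sketch below illustrates the pooled-variance t test commonly attributed to Keil et al. (2000) for comparing standardized path coefficients across two PLS groups. The compare_paths helper and all input values are hypothetical placeholders rather than estimates from this study.

from math import sqrt
from scipy import stats

def compare_paths(pc1, se1, n1, pc2, se2, n2):
    # Pooled-variance t statistic for the difference between two standardized
    # path coefficients estimated on independent groups (cf. Keil et al., 2000).
    pooled_se = sqrt(((n1 - 1) ** 2 / (n1 + n2 - 2)) * se1 ** 2 +
                     ((n2 - 1) ** 2 / (n1 + n2 - 2)) * se2 ** 2)
    t = (pc1 - pc2) / (pooled_se * sqrt(1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value
    return t, df, p

# Hypothetical example: an attitude -> intention path estimated separately
# in the low-risk (n = 183) and high-risk (n = 185) groups.
t, df, p = compare_paths(pc1=0.60, se1=0.08, n1=183, pc2=0.25, se2=0.10, n2=185)
print(f"t({df}) = {t:.2f}, p = {p:.4f}")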
4. Results
An initial pool of 416 participants were invited to this study. Of these, 16 participants
did not respond to the invitation, 19 did not pass the screening check as they had never
invested in the stock market, and 13 dropped midway. Complete responses from 368 (416 -
16 - 19 - 13) participants were thus admitted for analysis. Such a sample size is comparable
to those reported in related studies. Of the participants who proceeded to the experiment, 191
were assigned to the low-risk investment condition while 190 were assigned to the high-risk
investment condition. Eight from the first condition and five from the second dropped midway.
The final tallies were 183 participants in the low-risk condition and 185 in the high-risk
condition.
In terms of demographics, 213 (57.9%) were male and 155 (42.1%) were female. The
average age was 31.77 years (Min = 21, Max = 63, SD = 10.80). In terms of educational
qualification, 164 (44.6%) participants had a bachelor’s degree, 144 (39.1%) had a master’s
degree, 23 (6.3%) had ‘O’ or ‘A’ level qualifications, 21 (5.7%) were at diploma/advanced
diploma level, and the other 16 (4.3%) participants had a doctoral degree. In terms of
participants’ experience in the stock market investment, 97 (26.4%) had less than one-year
experience, 126 (34.2%) had one year to less than three years of experience, 97 (26.4%) had
three years to less than six years of experience, and 48 (13%) had greater than six years of
experience. The participants’ profile across the two experimental conditions is summarized
below (Overall / Low-risk condition / High-risk condition):
Education (frequency)
'O' or 'A' Levels: 23 (6.3%) / 0 (0%) / 23 (12.4%)
Diploma/Advanced Diploma: 21 (5.7%) / 5 (2.7%) / 16 (8.6%)
Bachelor: 164 (44.6%) / 82 (44.8%) / 82 (44.3%)
Master: 144 (39.1%) / 88 (48.1%) / 56 (30.3%)
Doctoral: 16 (4.3%) / 8 (4.4%) / 8 (4.3%)
Behavioral intention to accept AI-based recommendation (M ± SD): 4.34 ± 1.65 / 4.85 ± 1.47 / 3.84 ± 1.66
Cronbach’s Alpha (α), composite reliability (CR), and average variance extracted
(AVE) for all the constructs are reported in Table 3. The Cronbach’s α values exceeded the
threshold of 0.7, confirming internal consistency of the measures (Nunnally, 1978). All CR
and AVE values exceeded 0.7 and 0.5 respectively, indicating acceptable convergent validity
(Fornell & Larcker, 1981). Moreover, all items loaded on their respective constructs as shown
in Tables A1 and A2 of Appendix A.
As described in Section 3.4, each of the hypotheses was tested using PLS-SEM. The
statistical significance of the path coefficients was assessed. The control variables (gender,
age, and investment self-efficacy) were not significantly associated with the behavioral
intention to accept AI-based recommendation (gender: t =
0.34, p > 0.05; age: β = -0.004, t = 0.05, p > 0.05; self-efficacy: β = 0.08, t = 0.77, p > 0.05).
After accounting for the control variables, the following hypothesized relationships
were found to be significant: Attitude toward AI was positively associated with behavioral
intention to accept AI-based recommendation (β = 0.54, t = 3.62, p < 0.001). This lends
support to H1. Next, attitude toward AI was positively associated with trust in AI (β = 0.71, t
= 10.42, p < 0.001) and perceived accuracy of AI (β = 0.68, t = 10.13, p < 0.001), which lend
support to H2 and H3 respectively.
However, the relationships of trust and perceived accuracy with behavioral intention
to accept AI-based recommendation were not significant. Therefore, H4 and H5 are not
supported. Table 5 summarizes the results of testing the hypotheses H1-H5 using PLS-SEM.
R2 values
Trust in AI: 50.4%
Perceived accuracy of AI: 46.2%
Behavioral intention to accept AI-based recommendation: 33.5%
Note. * p < 0.05, ** p < 0.01, *** p < 0.001.
Control variables: Gender, Age, Investment self-efficacy.
The R2 values for the endogenous constructs including trust in AI, perceived accuracy
of AI, and behavioral intention to accept AI-based recommendation were 50.4%, 46.2% and
33.5% respectively. The cross-validated redundancy measure (Q2) was also examined. With
an omission distance of seven, the positive Q2 (Q2 > 0) values for the endogenous constructs
indicated that the model fit the data well (Hair et al., 2019).
To test the moderating effect of risk level, a multi-group PLS analysis was conducted.
Statistical tests were performed to check the homogeneity of the two groups in terms of the
control variables of gender, age, and investment self-efficacy. With respect to gender, Chi-
square results indicated no significant difference (χ2(1, N = 368) = 0.05, Cramer’s V = 0.01,
p > 0.05). With respect to age, there was a significant difference between the two groups;
t(345.03) = 4.65, p < 0.01. Participants’ age in the low-risk condition (34.33 ± 11.68) was
significantly higher than that in the high-risk condition (29.24 ± 9.19). With respect to
investment self-efficacy, there was a statistically significant difference between the two
groups; t(366) = 2.66, p < 0.01. Participants’ investment self-efficacy in the low-risk
condition (4.08 ± 1.42) was significantly higher than that in the high-risk condition (3.70 ±
1.38). That said, the control variables remained consistently non-significant in the high-risk
condition (gender: β = 0.03, t = 0.44, p > 0.05; age: β = 0.02, t = 0.4, p > 0.05; self-efficacy: β
= 0.11, t = 1.2, p > 0.05) as well as the low-risk condition (gender: β = -0.1, t = 1.04, p >
0.05; age: β = -0.03, t = 0.3, p > 0.05; self-efficacy: β = 0.01, t = 0.1, p > 0.05). The results of
the API model for the low-risk and the high-risk conditions are depicted in Figure 4 and
Figure 5 respectively.
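The homogeneity checks reported above can be reproduced in spirit with standard routines; the sketch below uses SciPy with simulated placeholder data (the contingency counts and group scores are assumptions, not the study's raw responses).

import numpy as np
from scipy import stats

# Chi-square test of the gender split across conditions
# (rows: low-risk, high-risk; columns: male, female). Counts are placeholders.
gender_table = np.array([[106, 77],
                         [107, 78]])
chi2, p_gender, dof, _ = stats.chi2_contingency(gender_table, correction=False)

rng = np.random.default_rng(2)

# Welch's t-test for age (unequal variances, hence the fractional df reported).
age_low = rng.normal(34.3, 11.7, 183)
age_high = rng.normal(29.2, 9.2, 185)
t_age, p_age = stats.ttest_ind(age_low, age_high, equal_var=False)

# Student's t-test for investment self-efficacy (pooled variance, df = 366).
eff_low = rng.normal(4.08, 1.42, 183)
eff_high = rng.normal(3.70, 1.38, 185)
t_eff, p_eff = stats.ttest_ind(eff_low, eff_high, equal_var=True)

print(chi2, p_gender)
print(t_age, p_age)
print(t_eff, p_eff)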
There was a significant difference between the two groups for the relationship between
attitude toward AI and behavioral
intention to accept AI-based recommendation. Compared with the participants in the high-
risk situation, those in the low-risk situation showed a stronger relation (t = 4.08, p < 0.001).
This lends support to H6(a).
Furthermore, there was a significant difference between the two groups for the
relationship between trust in AI and behavioral intention to accept AI-based recommendation.
Compared with the participants in the low-risk situation, those in the high-risk situation
showed a stronger relation. This lends support to H6(b).
Finally, there was also a significant difference between the two groups for the
relationship between perceived accuracy of AI and behavioral intention to accept AI-based
recommendation. Compared with the participants in the low-risk situation, those in the high-
risk situation showed a stronger relation (t = -24.79, p < 0.001). This lends support to H6(c).
R2 values (low-risk condition / high-risk condition)
Trust in AI: 46% / 56.3%
Perceived accuracy of AI: 43.3% / 53.7%
Behavioral intention to accept AI-based recommendation: 12.2% / 62%
5. Discussion
Four major findings could be gleaned from this research. First, based on the results
corresponding to H1, attitude toward AI was positively associated with behavioral intention
to accept AI-based recommendation. While prior research
suggests that the attitude toward AI could be less favorable for black-box vis-à-vis
transparent systems (Ochmann et al., 2021), this paper reveals that users’ attitude still plays a
crucial role in the case of opaque AI systems. As long as they hold a favorable attitude
toward AI systems, users seem to accept their inability to understand the underlying
computational complexities.
Second, from the results corresponding to H2 and H3, attitude toward AI was
positively associated with trust in AI (β = 0.71, p < 0.001) and perceived accuracy of AI (β =
0.68, p < 0.001). This is generally consistent with long-standing research findings (e.g.,
Dwivedi et al. 2019; Venkatesh et al., 2003) that attitude is not only a key predictor of
embracing technology but also shapes trust and perceived accuracy of what technology can
offer. This persistent importance of attitude has implications for research in human-AI
interaction. Going forward, as AI becomes more pervasive, it is important for public debate
to help shape a realistic attitude toward AI.
Neither automation bias nor AI aversion is helpful to society (Bigman & Gray, 2018; Chong
et al., 2022; Dietvorst et al., 2015; Tomsett et al., 2020; Wickramasinghe et al., 2020).
Instead, it would be wise to focus realistically on what AI can do, appreciate its potential, and
acknowledge its limitations.
Third, the results corresponding to H4 and H5 show that neither trust in AI nor
perceived accuracy of AI was significantly associated with behavioral intention to accept AI-
based recommendations in the full sample. This is at odds with prior research (Ho et al.,
2017; Liu & Tao, 2022; Schaffer et al., 2015) and could be attributed to the unique context of
AI-enabled stock market investment, which has not been explored hitherto. Thus, the paper not
only expands the contextual scope of the human-AI interaction literature but also enriches it
with a counter-intuitive finding that warrants further inquiry. Future research is needed to shed
light on how perception-related constructs such as trust and perceived accuracy operate across
different contexts.
Fourth, from the results corresponding to H6, risk level moderated how attitude, trust
and perceived accuracy were related to the behavioral intention to accept AI-based
recommendations in high-risk and low-risk situations. It is evident that the forces affecting
users’ decision to embrace AI are contingent on the level of risk involved.
Prior research suggests that users tend to rely on automation for tasks that call for
logic (Gaudiello et al., 2016; Logg, 2017). Extending the literature, this paper shows that
even for a task such as investment decision-making that may also involve intuition, users
are amenable to machine-generated advice. However, the underlying psychological
mechanism of accepting machine-generated advice depends on the level of risk. When risk is
low, a favourable attitude toward AI seems sufficient to promote machine reliance. However,
when risk is high, a favourable attitude toward AI is a necessary but no longer sufficient
condition for AI acceptance. Instead, to cope with the risk, users carefully deliberate on their
trust in AI and perceived accuracy of AI before accepting its
advice. In other words, compared with the low-risk condition involving blue-chip stocks, the
high-risk condition involving penny stocks compelled the participants to be more vigilant in
their decision-making.
6. Conclusion
This paper developed and empirically validated a conceptual model that explains the behavioral
intention to accept AI-based recommendations as a function of attitude toward AI, trust,
perceived accuracy and risk level. The model was tested through an experiment
using a simulated AI-enabled investment recommendation system. The results reveal that
attitude toward AI is positively associated with the behavioral intention to accept AI-based
recommendations as well as with trust in AI and perceived accuracy of AI, and that risk level
moderates how attitude, trust and perceived accuracy vary with behavioral intention to accept
AI-based recommendations.
6.1. Theoretical Contributions
On the theoretical front, the paper contributes to the human-AI interaction literature in
three ways. First, it proposes an attitude-perception-intention (API) model that sheds light on
the underlying psychological mechanism of how users decide to accept AI-enabled advice.
The model enhances current understanding of the relation between attitude toward AI and
behavioral intention to accept AI-based recommendation (Ho et al., 2017; Liu & Tao, 2022;
Schaffer et al., 2015) by taking into account trust, perceived accuracy and risk level. It shows
that the attitude-intention relationship hinges, more
specifically, on the level of risk. When risk is low, a favourable attitude toward AI is enough.
However, when risk is high, a favourable attitude alone is no longer sufficient for AI
acceptance. In a state of heightened alert, users become more careful in assessing their trust
in AI and perceived accuracy of AI before accepting its
recommendations. Put differently, the API model not only deepens the understanding of the
attitude-intention relationship but also clarifies how trust and perceived accuracy come into
play depending on the risk condition.
Second, the paper extends the human-AI interaction literature by examining
tasks that call for intuition in finance—an example of a high involvement service—where
human counsel is usually preferred to machine-generated advice (Longoni et al., 2019; Zhang
et al., 2021). Prior research suggests that users readily accept AI especially when dealing with
rule-based and routine work (Gaudiello et al., 2016; Logg, 2017). Extending the literature,
this paper argues that users are also amenable to AI-based recommendations for tasks such as
making investment decisions that demand intuitive judgements. Depending on attitude, trust,
perceived accuracy and risk level, there could be a case for AI acceptance.
Third, this paper represents one of the earliest attempts to apply the conservation of
resources theory in the context of stock market investment. It validates the argument that the
threat of resource loss heightens vigilance when investing in high-risk penny
stocks (Hobfoll, 1989; 2011). On the other hand, when investing in blue-chip stocks where
the threat of resource loss is perceived to be minimal, users tend to let their guard down in
making decisions. Additionally, this paper adds to the literature on risk (Bao et al., 2022) by
showing how the level of risk plays a moderating role in AI acceptance. Specifically, in a
high-risk situation, high trust and perceived accuracy are needed for users to buy into AI-
based recommendations.
6.2. Practical Implications
On the practical front, the paper offers insights into how the uptake of AI could be promoted
in high involvement services such as healthcare
and finance where machine-generated advice has received much resistance (Longoni et al.,
2019; Zhang et al., 2021). As new AI recommendation systems proliferate, it is important for
policymakers to ensure that the public develops a realistic attitude toward AI.
Moreover, interventions to promote AI acceptance could be tailored according to the
decision-making context. For example, in situations where there is
high risk, successful performance of the systems in the past could be recounted to inspire user
trust and confidence.
Two limitations in this paper need to be acknowledged. One, as with all quantitative
studies, it was not possible to gain richer insights into how the participants made decisions
whether to accept AI-enabled advice. Future research could build on the proposed API model
by using interviews or focus groups to identify other constructs that further explain the
behavioral intention to accept AI-based recommendations. Two, no specific
amount of investable assets was specified in the experiment. Neither were participants
presented with scenarios where an investment portfolio could comprise both high-risk and
low-risk stocks in different proportions. Hence, future research could consider refining the
experiment to reflect a more realistic context under which investment decisions are made.
Hopefully, this will deepen our understanding of how users decide whether to embrace AI.
References
Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J.
Beckman (Eds.), Action-control: From cognition to behavior (pp. 11–39). Heidelberg:
Springer.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior.
Englewood Cliffs, NJ: Prentice-Hall.
Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust?
Perceptions about automated decision-making by artificial intelligence. AI & Society,
35(3), 611-623.
Ashoori, M., & Weisz, J. D. (2019). In AI we trust? Factors that influence trustworthiness of
AI-infused decision-making processes. arXiv preprint. Retrieved from
https://arxiv.org/abs/1912.02675
Bao, L., Krause, N. M., Calice, M. N., Scheufele, D. A., Wirz, C. D., Brossard, D., ... &
Xenos, M. A. (2022). Whose AI? How different publics think about AI and its social
impacts. Computers in Human Behavior, 130, 107182.
Belanche, D., Casaló, L.V. & Flavián, C. (2019). Artificial Intelligence in FinTech:
understanding robo-advisors adoption among customers. Industrial Management &
Data Systems, 119(7), 1411-1430.
Berkelaar, A. B., Kouwenberg, R., & Post, T. (2004). Optimal portfolio choice under loss
aversion. Review of Economics and Statistics, 86(4), 973-987.
Bickmore, T. W., Trinh, H., Olafsson, S., O'Leary, T. K., Asadi, R., Rickles, N. M., & Cruz,
R. (2018). Patient and consumer safety risks when using conversational assistants for
medical information: an observational study of Siri, Alexa, and Google Assistant.
Journal of Medical Internet Research, 20(9), e11510.
Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions.
Cognition, 181, 21–34.
Chang, M., Ng, J., & Yu, K. (2008). The influence of analyst and management forecasts on
investor decision making: An experimental approach. Australian Journal of
Management, 33(1), 47-67.
Cheng, X., Guo, F., Chen, J., Li, K., Zhang, Y., & Gao, P. (2019). Exploring the trust
influencing mechanism of robo-advisor service: A mixed method approach.
Sustainability, 11(18), Article 4917.
Choe, Y. C., Park, J., Chung, M., & Moon, J. (2009). Effect of the food traceability system
for building trust: Price premium and buying behavior. Information Systems
Frontiers, 11(2), 167-179.
Chong, L., Zhang, G., Goucher-Lambert, K., Kotovsky, K., & Cagan, J. (2022). Human
confidence in artificial intelligence and in themselves: The evolution and impact of
confidence on adoption of AI advice. Computers in Human Behavior, 127, 107018.
Cullen, J. B., & Gordon, R. H. (2007). Taxes and entrepreneurial risk-taking: Theory and
evidence for the US. Journal of Public Economics, 91(7-8), 1479-1505.
Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital
Journalism, 5(7), 809-828.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People
erroneously avoid algorithms after seeing them err. Journal of Experimental
Psychology: General, 144(1), 114–126.
Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-
examining the unified theory of acceptance and use of technology (UTAUT):
Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719-734.
Ferrario, A., Loi, M., & Viganò, E. (2019). In AI we trust Incrementally: A multi-layer model
of trust to analyze Human-Artificial intelligence interactions. Philosophy &
Technology, 1-17.
Fornell, C., & Larcker, D. (1981). Structural equation models with unobserved variables and
measurement error: Algebra and statistics. Journal of Marketing Research, 18(3),
382-388.
Fortune Business Insights. (2020). Technology & media: Artificial intelligence market.
Fortune Business Insights. Retrieved from
https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-
market-100114
Gaudiello, I., Zibetti, E., Lefort, S., Chetouani, M., & Ivaldi, S. (2016). Trust as indicator of
robot functional and social acceptance. An experimental study on user conformation
to iCub answers. Computers in Human Behavior, 61, 633-655.
Gool, E. V., Ouytsel, J. V., Ponnet, K., & Walrave, M. (2015). To share or not to share?
Adolescents’ self-disclosure about peer relationships on Facebook: An application of
the prototype willingness model. Computers in Human Behavior, 44, 230-239.
Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumer’s acceptance of artificially
intelligent (AI) device use in service delivery. International Journal of Information
Management, 49, 157-169.
Guszcza, J., Lewis, H., & Evans-Greenwood, P. (2017). Cognitive collaboration: Why
humans and computers think better together. Deloitte Review, 20, 8-29.
Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report
the results of PLS-SEM. European Business Review, 31(1), 2-24.
Ho, S. M., Ocasio-Velázquez, M., & Booth, C. (2017). Trust or consequences? Causal effects
of perceived risk and subjective norms on cloud technology adoption. Computers &
Security, 70, 581-595.
Jacobsen, R. M., Bysted, L., Johansen, P. S., Papachristos, E., & Skov, M. B. (2020).
Perceived and measured task effectiveness in human-AI collaboration. In Extended
Abstracts of the Conference on Human Factors in Computing Systems (pp. 1-9).
ACM.
Jamaludin, A., & Ahmad, F. (2013). Investigating the relationship between trust and intention
to purchase online. Business and Management Horizons, 1(1), 1-9.
Jha, S., & Topol, E. J. (2016). Adapting to artificial intelligence: Radiologists and
pathologists as information specialists. JAMA, 316(22), 2353-2354.
Keil, M., Tan, B., Wei, K. K., & Saarinen, T. (2000). A cross-cultural study on escalation of
commitment behavior in software projects. MIS Quarterly, 24(2), 299-325.
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2018). Human
decisions and machine predictions. The Quarterly Journal of Economics, 133(1), 237-
293.
Komiak, S., & Benbasat, I. (2006). The effects of personalization and familiarity on trust and
adoption of recommendation agents. MIS Quarterly, 30(4), 941-960.
Lai, J. Y. (2009). How reward, computer self‐efficacy, and perceived power security affect
knowledge management systems success: An empirical investigation in high‐tech
companies. Journal of the American Society for Information Science and Technology,
60(2), 332-347.
Li, N. L., & Zhang, P. (2005). The intellectual development of human-computer interaction
research: A critical assessment of the MIS literature (1990-2002). Journal of the
Association for information Systems, 6(11), Article 9.
Liang, T., Robert, L., Sarker, S., Cheung, C. M., Matt, C., Trenz, M., & Turel, O. (2021).
Artificial intelligence and robots in individuals' lives: how to align technological
possibilities and ethical issues. Internet Research, 31(1), 1-10.
Lichtenthaler, U. (2019). Extremes of acceptance: Employee attitudes toward artificial
intelligence. Journal of Business Strategy, 41(5), 39-45.
Liu, K., & Tao, D. (2022). The roles of trust, personalization, loss of privacy, and
anthropomorphism in public acceptance of smart healthcare services. Computers in
Human Behavior, 127, 107026.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial
intelligence. Journal of Consumer Research, 46(4), 629-650.
Lu, L., Cai, R., & Gursoy, D. (2019). Developing and validating a service robot integration
willingness scale. International Journal of Hospitality Management, 80, 36-51.
Manzey, D., Reichenbach, J., & Onnasch, L. (2012). Human performance consequences of
automated decision aids: The impact of degree of automation and system experience.
Journal of Cognitive Engineering and Decision Making, 6(1), 57-87.
Markoff, J. (2016). Machines of loving grace: The quest for common ground between
humans and robots. Harper Collins Publishers.
Montford, W., & Goldsmith, R. E. (2016). How gender and financial self‐efficacy influence
investment risk taking. International Journal of Consumer Studies, 40(1), 101-106.
Nguyen, T. T. H., Nguyen, N., Nguyen, T. B. L., Phan, T. T. H., Bui, L. P., & Moon, H. C.
(2019). Investigating consumer attitude and intention towards online food purchasing
in an emerging economy: An extended TAM approach. Foods, 8(11), Article 576.
Noah, S., & Lingga, M. T. P. (2020). The Effect of Behavioral Factors in Investor’s
Investment Decision. In Conference Series (Vol. 3, No. 1, pp. 398-413).
Nourani, M., Kabir, S., Mohseni, S., & Ragan, E. D. (2019). The effects of meaningful and
meaningless explanations on trust and perceived system accuracy in intelligent
systems. In Proceedings of the AAAI Conference on Human Computation and
Crowdsourcing (Vol. 7, No. 1, pp. 97-105).
Nunnally, J.C. (1978). Psychometric theory, 2nd ed. New York, NY: McGraw-Hill.
Ochmann, J., Zilker, S., & Laumer, S. (2021). The evaluation of the black box problem for
AI-based recommendations: An interview-based study. In International Conference
on Wirtschaftsinformatik (pp. 232-246). Springer, Cham.
Pavlou, P. A., & Fygenson, M. (2006). Understanding and predicting electronic commerce
adoption: An extension of the theory of planned behavior. MIS Quarterly, 30(1), 115-
143.
Pember, S. E., Zhang, X., Baker, K., & Bissell, K. (2018). Application of the theory of
planned behavior and uses and gratifications theory to food-related photo-sharing on
social media. Californian Journal of Health Promotion, 16(1), 91-98.
Persson, A., Laaksoharju, M., & Koga, H. (2021). We mostly think alike: Individual
differences in attitude towards AI in Sweden and Japan. The Review of Socionetwork
Strategies, 15(1), 123-142.
Reb, J. (2008). Regret aversion and decision process quality: Effect of regret salience on
decision process carefulness. Organizational Behavior and Human Decision
Processes. 105(2), 169-182.
Rudin, C., Wang, C., & Coker, B. (2018). The age of secrecy and unfairness in recidivism
prediction. arXiv preprint arXiv:1811.00731. Retrieved from
https://arxiv.org/abs/1811.00731
Rzepka, C., & Berger, B. (2018). User Interaction with AI-enabled Systems: A Systematic
Review of IS Research. Thirty Ninth International Conference on Information
Systems, Article 7.
Sanne, P. N., & Wiese, M. (2018). The theory of planned behaviour and user engagement
applied to Facebook advertising. South African Journal of Information Management,
20(1), 1-10.
Sautua, S. (2017). Does risk cause inertia in decision making? An experimental study of the
role of regret aversion and indecisiveness. Journal of Economic Behavior &
Organization, 136, 1-14.
Schaffer, J., Hollerer, T., & O'Donovan, J. (2015). Hypothetical recommendation: A study of
interactive profile manipulation behavior for recommender systems. In Proceedings of
the International Florida Artificial Intelligence Research Society Conference (pp.
507-512). AAAI.
Schwert, G. W. (1989). Why does stock market volatility change over time? The Journal of
Finance, 44(5), 1115-1153.
Shin, D. (2020). How do users interact with algorithm recommender systems? The interaction
of users, algorithms, and performance. Computers in Human Behavior, 109, 106344.
Sloane, E. B., & Silva, R. J. (2020). Artificial intelligence in medical devices and clinical
decision support systems. In Clinical Engineering Handbook (pp. 556-568). Academic
Press.
Sitkin, S. B., & Pablo, A. L. (1992). Reconceptualizing the determinants of risk behavior.
Academy of Management Review, 17(1), 9-38.
Smith, C. D., & Mentzer, J. T. (2010). User influence on the relationship between forecast
accuracy, application and logistics performance. Journal of Business Logistics, 31(1),
159-177.
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.
Tomsett, R., Preece, A., Braines, D., Cerutti, F., Chakraborty, S., Srivastava, M., ... &
Kaplan, L. (2020). Rapid trust calibration through interpretable and risk-aware AI.
Patterns, 1(4), Article 100049.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of
information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Waweru, N. M., Munyoki, E., & Uliana, E. (2008). The effects of behavioural factors in
investment decision-making: a survey of institutional investors operating at the
Nairobi Stock Exchange. International Journal of Business and Emerging Markets,
1(1), 24-41.
Wickramasinghe, C. S., Marino, D. L., Grandio, J., & Manic, M. (2020). Trustworthy AI
development guidelines for human system interaction. In Proceedings of the
International Conference on Human System Interaction (pp. 130-136). IEEE.
Williams, M. D. (2021). Social commerce and the mobile platform: Payment and security
perceptions of potential users. Computers in Human behavior, 115, 105557.
Wu, Y., Mou, Y., Li, Z., & Xu, K. (2020). Investigating American and Chinese subjects’
explicit and implicit perceptions of AI-generated artistic work. Computers in Human
Behavior, 104, 106186.
Xu, D., Huang, W. W., Wang, H., and Heales, J. (2014). Enhancing E-Learning Effectiveness
Using an Intelligent Agent-Supported Personalized Virtual Learning Environment: An
Empirical Investigation. Information & Management, 51(4), 430-440.
Zhang, L., Pentina, I., & Fan, Y. (2021). Who do you choose? Comparing perceptions of
human vs robo-advisor in the context of financial services. Journal of Services
Marketing, 35(5), 634-646.
Złotowski, J., Yogeeswaran, K., & Bartneck, C. (2017). Can we control it? Autonomous
robots threaten human identity, uniqueness, safety, and resources. International
Journal of Human-Computer Studies, 100, 48-54.
Appendix A
Table A1: Item loadings and cross loadings for high-risk condition.
Constructs: (1) Investment self-efficacy; (2) Behavioral intention to accept AI-based
recommendation; (3) Attitude toward AI; (4) Trust in AI; (5) Perceived accuracy of AI.
Construct Item (1) (2) (3) (4) (5)
Item 1 0.88 0.46 0.48 0.36 0.39
(1) Item 2 0.82 0.48 0.57 0.40 0.41
Item 3 0.88 0.51 0.45 0.27 0.35
Item 1 0.44 0.94 0.70 0.63 0.65
(2) Item 2 0.46 0.97 0.72 0.68 0.65
Item 3 0.38 0.97 0.73 0.64 0.63
Item 1 0.54 0.71 0.94 0.69 0.72
(3) Item 2 0.59 0.72 0.95 0.71 0.70
Item 3 0.51 0.70 0.94 0.72 0.67
Item 1 0.49 0.67 0.77 0.84 0.72
(4) Item 2 0.30 0.54 0.59 0.88 0.51
Item 3 0.17 0.49 0.50 0.82 0.50
Item 1 0.42 0.66 0.68 0.67 0.91
(5) Item 2 0.32 0.50 0.53 0.53 0.83
Item 3 0.44 0.61 0.72 0.64 0.92
Note. The bolded values indicate the loading of each item to a construct in the respective
columns. The other values indicate the cross loadings.
Table A2: Item loadings and cross loadings for low-risk condition.
Constructs: (1) Investment self-efficacy; (2) Behavioral intention to accept AI-based
recommendation; (3) Attitude toward AI; (4) Trust in AI; (5) Perceived accuracy of AI.
Construct Item (1) (2) (3) (4) (5)
Item 1 0.89 0.12 0.37 0.26 0.23
(1) Item 2 0.87 0.12 0.43 0.25 0.21
Item 3 0.89 0.12 0.29 0.15 0.18
Item 1 0.12 0.93 0.27 0.08 0.06
(2) Item 2 0.13 0.96 0.31 0.12 0.10
Item 3 0.13 0.93 0.27 0.14 0.11
Item 1 0.38 0.31 0.93 0.59 0.61
(3) Item 2 0.39 0.26 0.93 0.66 0.62
Item 3 0.37 0.28 0.92 0.64 0.60
Item 1 0.30 0.16 0.67 0.89 0.73
(4) Item 2 0.28 0.11 0.65 0.90 0.72
Item 3 0.17 0.06 0.26 0.72 0.47
Item 1 0.23 0.14 0.62 0.75 0.94
(5) Item 2 0.14 0.06 0.55 0.66 0.93
Item 3 0.27 0.05 0.65 0.74 0.87
Note. The bolded values indicate the loading of each item to a construct in the respective
columns. The other values indicate the cross loadings.