
This is a repository copy of AI-enabled investment advice: Will users buy it?

White Rose Research Online URL for this paper:


https://eprints.whiterose.ac.uk/190888/

Version: Accepted Version

Article:
Chua, Alton Y.K., Pal, Anjan orcid.org/0000-0001-7203-7126 and Banerjee, Snehasish
orcid.org/0000-0001-6355-0470 (2023) AI-enabled investment advice: Will users buy it?
Computers in Human Behavior. 107481. ISSN 0747-5632

https://doi.org/10.1016/j.chb.2022.107481

Reuse
This article is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs
(CC BY-NC-ND) licence. This licence only allows you to download this work and share it with others as long
as you credit the authors, but you can’t change the article in any way or use it commercially. More
information and the full terms of the licence here: https://creativecommons.org/licenses/

Takedown
If you consider content in White Rose Research Online to be in breach of UK law, please notify us by
emailing [email protected] including the URL of the record and the reason for the withdrawal request.

[email protected]
https://eprints.whiterose.ac.uk/
AI-enabled investment advice: Will users buy it?

Abstract

This paper develops an attitude-perception-intention (API) model of AI acceptance to explain


individuals’ behavioral intention to accept AI-based recommendations as a function of
attitude toward AI, trust and perceived accuracy with risk-level as a moderator. The API
model was empirically validated through a between-participants experiment (N = 368) using
a simulated AI-enabled investment recommendation system. One experimental condition
depicted low-risk investment recommendation involving blue-chip stocks while the other
depicted high-risk investment recommendation involving penny stocks. Attitude toward AI
predicted behavioral intention to accept AI-based recommendations, trust in AI, and
perceived accuracy of AI. Furthermore, risk level emerged as a significant moderator. When
risk was low, a favourable attitude toward AI seemed sufficient to promote algorithmic
reliance. However, when risk was high, a favourable attitude toward AI was a necessary but
no longer sufficient condition for AI acceptance. The API model contributes to the human-AI
interaction literature by not only shedding light on the underlying psychological mechanism
of how users buy into AI-enabled advice but also adding to the scholarly understanding of AI
recommendation systems in tasks that call for intuition in high involvement services such as
finance where human counsel is usually preferred to machine-generated advice.

Keywords: AI-based recommendation, Decision Sciences, Investment decision, Technology

adoption, Trust.

1. Introduction

The diffusion of artificial intelligence (AI) technologies into our daily lives has picked

up considerable momentum in recent years (Gursoy et al., 2019). The global AI market size,

which was valued at US$27.23 billion in 2019, is expected to reach a staggering US$267

billion by 2027 (Fortune Business Insights, 2020). From self-driving cars to voice-activated

home assistance devices, AI has effectively taken over routine tasks that were previously

done by humans (Bickmore et al., 2018; Chong et al., 2022; Liu & Tao, 2022; Sloane & Silva,

2020).

To ease decision-making, AI solutions are now available not only for low-stake

activities such as personalized shopping (Ashoori & Weisz, 2019) and news recommendation
(Diakopoulos & Koliska, 2017) but also for situations when choices are highly consequential

as in cancer screening (Jha & Topol, 2016) and prison sentencing (Ashoori & Weisz, 2019).

Yet, public opinion on the general outlook of AI remains divided. Some envision a rose-

tinted future while others see a calamitous apocalypse (Markoff, 2016; Tegmark, 2017; Wu et

al., 2020). Evidence that people buy into machine-generated advice has been mixed (Bigman

& Gray, 2018; Dietvorst et al., 2015; Wickramasinghe et al., 2020). This paper is therefore

motivated by the limited understanding of the conditions under which user acceptance of AI

can be influenced.

Even as research on human-AI interaction continues to gain traction, two gaps could

be identified. First, the underlying psychological mechanism of how users decide to accept

AI-enabled advice is not yet well understood. In tandem with the launch of new AI

recommendation systems, there have been calls for research to better explain humans’

algorithmic reliance (Kleinberg et al., 2018; Logg, 2017). Second, the literature is silent on

the way the level of risk alters the behavioral intention to accept AI-enabled advice (Bao et

al., 2022). Any decision entails some degree of risk, especially if it has to be made in high

involvement contexts such as healthcare and finance where human counsel is usually

preferred to machine-generated advice (Longoni et al., 2019; Zhang et al., 2021). Hence, the

question of how AI uptake can be promoted in such high involvement services remains open.

To address the first research gap, this paper builds on the literature on user behavior

toward technology. From early works (e.g., Ajzen, 1985; Ajzen & Fishbein, 1980; Venkatesh

et al., 2003) to contemporary studies (e.g., Dwivedi et al., 2019), attitude has been shown

consistently to predict behavioral intention to engage with technological innovations. Attitude

toward AI is thus expected to relate positively to the acceptance of AI-based

recommendations. With this as the starting point, this paper further argues that attitude could

also be positively associated with trust (Cheng et al., 2019; Chong et al., 2022; Ho et al.,
2017) and perceived accuracy (Jacobsen et al., 2020; Schaffer et al., 2015), especially when

AI is intended to make estimates and forecasts. Trust and perceived accuracy are important to

be studied given the growing concern of how much black-box AI algorithms promote the

core values of credence, fairness and usefulness (Araujo et al., 2020; Liang et al., 2021;

Ochmann et al., 2021). In this paper, trust refers to users’ willingness to depend on AI for

decision-making based on gut-feeling (Ferrario et al., 2019; Komiak & Benbasat, 2006) while

perceived accuracy is the perception of the extent to which AI-generated advice reflects

the ideal recommendation free of human biases and errors (Smith & Mentzer, 2010).

Additionally, to address the second research gap, this paper considers the role of risk

associated with financial investment. In particular, stock market investment was used as the

context of investigation because there is currently keen research and practical interest in

applying AI in capital markets (Ho et al., 2017; Sun, 2020). Moreover, the volatility of the

stock market lends itself readily to the study of risk, which involves unforeseen contingencies

(Ho et al., 2017; Pavlou & Fygenson, 2006; Schwert, 1989). Depending on the level of risk,

the readiness to buy into AI’s advice could change. However, the literature remains largely

silent on how risk level interacts with attitude, trust and perceived accuracy in shaping users’

inclination toward AI.

For these reasons, the objective of this paper is to develop and empirically validate a

conceptual model that explains the behavioral intention to accept AI-based recommendations

as a function of attitude toward AI, trust, perceived accuracy and risk level. The proposed

model is tested through a between-participants experiment using a simulated AI-enabled

investment recommendation system. A total of 368 participants were randomly and evenly

assigned to one of two experimental conditions, one depicting low-risk investment

recommendation while the other depicting high-risk investment recommendation.


The paper is significant for both theory and practice. While prior research suggests

attitude to be a strong predictor of behavioral intention (Gool et al., 2015; Pember et al.,

2018; Sanakulov & Karjaluoto, 2015; Sanne & Wiese, 2018), this paper takes the relationship

as the point of departure and unpacks it to offer deeper insights. Specifically, it proposes an

attitude-perception-intention (API) model of AI acceptance with the level of risk expected to

play a moderating role. Perception is conceptualized as trust in AI and perceived accuracy of

AI. In so doing, the paper contributes to the growing body of literature on human-AI

interaction. On the practical front, the findings shed light on the conditions in which AI

acceptance could be enhanced. This can be useful for policy-makers and practitioners who

design interventions to promote society’s behavioral intention to rely on AI. In turn, it can

pave the way for the successful commercialization of new AI systems in high consumer

involvement industries such as healthcare and finance.

The remainder of the paper proceeds as follows. Section 2 is dedicated to literature

review and hypotheses development. Section 3 describes the research design and explains

how data were collected and analyzed. Section 4 and Section 5 present the results and the

discussion respectively. Section 6 concludes with theoretical and practical implications of the

paper, as well as acknowledges the limitations and offers possible research directions.

2. Literature Review and Hypotheses Development

2.1. Related Works

Users’ behavioral responses to AI broadly lie on the continuum between automation

bias and AI aversion (Bigman & Gray, 2018; Chong et al., 2022; Dietvorst et al., 2015;

Tomsett et al., 2020; Wickramasinghe et al., 2020). Automation bias occurs when users

readily buy into computer recommendations instead of relying on their own judgment. At the

other end of the spectrum, AI aversion is exhibited when users reject algorithm-generated
advice (Tomsett et al., 2020). Automation bias and AI aversion tendencies could be shaped

by a variety of factors including cognitive load (Parasuraman & Manzey, 2010), accountability

in the decision process (Cummings, 2006), and individuals’ level of expertise and training

(Manzey et al., 2012).

Research on factors affecting users’ behavioral responses to AI can be summarized as

those related to system characteristics, user characteristics as well as context characteristics

(Rzepka & Berger, 2018). For example, findings suggest that the more transparent the

reasoning process of the AI system, the more favorable users will judge its decision quality

(Xu et al., 2014), and hence embrace its recommendation. On the other hand, an overly

autonomous AI system that displays a high degree of humanness can threaten, and thus repel

users (Złotowski et al., 2017). Next, the fit between users’ cognitive model and the system

presentation also influences acceptance (Shmueli et al. 2016). In the same way, users’

demographics such as gender and ethnicity that are congruous to system characteristics such

as avatar appearances can lead to positive system perception (Qiu & Benbasat, 2010). On the

context of use, automation bias is more likely to occur for functional tasks that call for logic

whereas AI aversion is triggered in situations that involve making intuitive and emotional

assessments (Gaudiello et al., 2016; Logg, 2017).

System characteristics are not investigated in this paper given that AI systems for

investment are typically opaque to protect their commercial advantage and proprietary rights

(Rudin et al., 2018). User characteristics such as gender, age and investment self-efficacy

(Montford & Goldsmith, 2016) are statistically controlled in testing the hypotheses, which are

proposed subsequently for the development of the conceptual model shown in Figure 1. Risk

level, a salient characteristic in the context of stock market, is incorporated in the

experimental conditions as high-risk and low-risk investments.


2.2. The role of attitude toward AI

Attitude toward any object refers to the mindset of an individual formed by prior

knowledge and experience. It turns into a predisposition for how the individual will value the

object subsequently (Persson et al., 2021). For the purpose of this paper, attitude toward AI

refers to the degree to which one views AI favorably (Lichtenthaler, 2019; Ochmann et al.,

2021). In reality, this attitude varies drastically with ebbs and flows of technological

breakthroughs (Markoff, 2016; Tegmark, 2017; Wu et al., 2020). While some consider AI to

have a positive impact on their everyday lives, others fear that it will result in a loss of their

jobs (Bigman & Gray, 2018; Dietvorst et al., 2015; Tegmark, 2017; Wickramasinghe et al.,

2020).

Prior works have consistently found attitude to be one of the key predictors of

behavioral intention (Pember et al., 2018; Sanakulov & Karjaluoto, 2015; Sanne & Wiese,

2018). This stems from the intrinsic motivation to maintain consistency between attitudes and

behaviors (Gool et al., 2015). Hence, attitude toward AI could potentially shape users’

inclination to accept the usage of AI in everyday life (Lichtenthaler, 2019; Persson et al.,

2021). Those with a favorable attitude toward AI could be more willing to accept AI-based

recommendation in the context of stock market investment than those who view AI with

disdain. Hence, the following is hypothesized:

H1: Attitude toward AI positively predicts behavioral intention to accept AI-based

recommendation.

2.3. The roles of trust and perceived accuracy

For the purpose of this paper, trust refers to users’ willingness to depend on AI for

decision-making based on gut-feeling (Ferrario et al., 2019; Komiak & Benbasat, 2006), and

perceived accuracy is defined as the perception of the extent to which AI-generated advice
reflects the ideal recommendation free of human biases and errors (Smith & Mentzer, 2010).

Trust and perceived accuracy are important constructs when it comes to stock market

investment. After all, when AI is intended to make predictions, the behavioral intention to

accept machine-generated advice could be largely contingent on users’ trust in AI (Cheng et

al., 2019; Chong et al., 2022; Ho et al., 2017) and perceived accuracy of AI (Jacobsen et al.,

2020; Schaffer et al., 2015). The dependence on trust and perceived accuracy could be further

heightened due to the opaque nature of typical investment-related AI systems (Araujo et al.,

2020; Liang et al., 2021; Rudin et al., 2018).

Given the volatility of the stock market, investors sometimes contend with regret

aversion, which refers to the fear of choosing an option that could turn out to be a bad one

(Berkelaar et al., 2004; Chang et al., 2008; Noah & Lingga, 2020). This leads either to a

preference for inaction (Sautua, 2017) or making the choice more conscientiously to

inoculate against self-blame (Reb, 2008). However, there is scant research hitherto on how

this dilemma plays out when the burden of decision-making is shifted from the self to

technology. Conceivably, when investment decisions are suggested by AI, heightened

vigilance in decision-making could cause investors to either maintain the status quo and

ignore machine-generated advice, or buy into the recommendations if they consider the

technology to be trustworthy and accurate.

Prior research shows that attitude-induced trust promotes behavioral outcomes (Ho et

al., 2017; Nguyen et al., 2019). In a similar way, perceived accuracy, which is related

positively to attitude, could also motivate behavioral intention (Nourani et al., 2019).

Therefore, while a favorable attitude toward AI seems to be positively associated with trust

and perceived accuracy, the opposite can be expected with an unfavorable attitude. Moreover,

greater levels of trust and accuracy seem likely to result in higher behavioral intention to
accept AI-based recommendation and vice-versa (Cheng et al., 2019; Ho et al., 2017;

Jacobsen et al., 2020; Schaffer et al., 2015). Thus, the following hypotheses are proposed:

H2: Attitude toward AI positively predicts trust in AI.

H3: Attitude toward AI positively predicts perceived accuracy of AI.

H4: Trust in AI positively predicts behavioral intention to accept AI-based

recommendation.

H5: Perceived accuracy of AI positively predicts behavioral intention to accept AI-

based recommendation.

2.4. The role of risk level

All investments carry some level of risk. For the purpose of this paper, risk is

conceptualized as volatility which refers to how much the price of a stock fluctuates within a

short timeframe (Schwert, 1989). Investing in blue-chip stocks which are associated with

well-established and financially stable companies is regarded as low risk. Not easily subject

to market speculation, the magnitude of their potential upside and downside is muted in the

short term. On the other hand, investing in penny stocks is regarded as high risk because of

the possible wild gyrations in their stock prices.
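To make the volatility construct concrete, the short sketch below estimates short-horizon volatility as the standard deviation of daily log returns, a common operationalization; the price series are hypothetical and serve only to contrast a stable, blue-chip-like stock with a gyrating, penny-stock-like one.

```python
import numpy as np

def short_horizon_volatility(prices, annualize=False, trading_days=252):
    """Volatility estimated as the standard deviation of daily log returns."""
    prices = np.asarray(prices, dtype=float)
    log_returns = np.diff(np.log(prices))  # r_t = ln(P_t / P_{t-1})
    vol = log_returns.std(ddof=1)
    return vol * np.sqrt(trading_days) if annualize else vol

# Hypothetical closing prices (not data from this study)
blue_chip_like = [100.0, 100.4, 100.1, 100.6, 100.3, 100.8]  # small fluctuations -> low risk
penny_like = [1.00, 1.40, 0.90, 1.60, 1.10, 0.70]            # wild gyrations -> high risk

print(short_horizon_volatility(blue_chip_like))
print(short_horizon_volatility(penny_like))
```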

Literature on risk taking suggests that the intention to perform a behavior depends on

level of risk involved (Cullen & Gordon, 2007; Sitkin & Pablo, 1992; Sitkin & Weingart,

1995). Investors’ willingness to go for high-risk or low-risk stocks depends on factors such as

investment self-efficacy and the perception of the likelihood of loss (Jasiniak, 2018;

Montford & Goldsmith, 2016). However, there is a dearth of studies on how individuals

decide whether to accept investment recommendations in high-risk and low-risk contexts

when advice comes from AI.


To this end, the conservation of resources theory could be brought to bear as it has

been widely applied to understand how individuals navigate their way through challenging

circumstances (Hobfoll, 1989; 2011). Under stress, the threat of resource loss is viewed more

saliently than the hope of resource gain. Hence, the instinct is to invest resources just to

protect against resource loss. Applying the theory in the context of investment, this means

that the attendant stress of a high-risk situation involving penny stocks may compel

individuals to be more vigilant. Even with a favorable attitude toward AI, they would still

make a careful assessment of their trust in AI and perceived accuracy of AI before deciding

whether to accept the machine-generated advice. In contrast, individuals in a low-risk

situation involving blue-chip stocks would be less dictated by loss aversion tendencies. As

long as they hold a favorable attitude toward AI, they would be willing to accept the

machine-generated advice, regardless of their trust in AI and perceived accuracy of AI.

For these reasons, risk level is expected to play a moderating role among attitude,

trust and perceived accuracy in their relationships with intention. In particular, the heightened

vigilance triggered under a high-risk investment situation could lead to stronger relations

between trust and intention as well as perceived accuracy and intention. This has the

inadvertent effect of weakening the relationship between attitude and intention. In other

words, the attitude-intention relationship can be expected to be stronger under a low-risk

investment situation. Therefore, the following hypotheses are posited:

H6(a): Risk level moderates the relation between attitude toward AI and behavioral

intention to accept AI-based recommendation. The relation is stronger in the low-risk

situation involving blue-chip stocks.

H6(b): Risk level moderates the relation between trust in AI and behavioral intention

to accept AI-based recommendation. The relation is stronger in the high-risk situation

involving penny stocks.


H6(c): Risk level moderates the relation between perceived accuracy of AI and

behavioral intention to accept AI-based recommendation. The relation is stronger in the high-

risk situation involving penny stocks.

Figure 1. Attitude-perception-intention (API) model of AI acceptance.

3. Methods

3.1. Research Design

A scenario-based between-participants online experiment was conducted to test the

hypotheses in the proposed API model of AI acceptance. Two experimental conditions were

set up to manipulate the level of risk. One induced low-risk investment with

recommendations for blue-chip stocks while the other induced high-risk investment with

recommendations for penny stocks.

Prior to the experiment, a pilot study was conducted for the purpose of manipulation

check. A total of 10 participants selected using convenience sampling were asked to rate the

level of risk associated with the two scenarios (Figure 2 and Figure 3) as either high or low.
There was unanimous agreement that both experimental conditions reflected their intended

risk levels.

3.2. Data Collection Procedure

Participants were recruited based on a combination of purposive and snowball

sampling. The inclusion criterion was that they must have prior experiences with stock

market investment. Data collection proceeded through the following two steps. First, after

informed consent was obtained, participants responded to a screening question to confirm

they had previously invested in the stock market. They also completed a short questionnaire

to provide demographic data and indicate their investment self-efficacy. Thereafter, they

were asked to imagine they were investors looking to increase their portfolio and were

introduced to SMART-AI-TRADER, a simulated AI system created for this study.

Participants were told that it uses a proprietary AI algorithm that learns from stocks’

fundamentals, price and volume history to provide unbiased advice to investors. The system

has recommended Stock A.

In the second step, participants were randomly and evenly assigned to one of the two

experimental conditions using Qualtrics randomizer, with descriptions of either blue-chip or

penny stocks provided. This was to ensure they understood the level of risk the stock carried.

Participants were then exposed to the AI-based recommendation. Shown in Figure 2,

SMART-AI-TRADER has provided a BUY recommendation for a blue-chip stock. Figure 3

shows a BUY recommendation for a penny stock. After that, participants were asked to

indicate their intentions to accept AI-based recommendation. Finally, they responded to a set

of questionnaire items measuring their trust in AI, perceived accuracy of AI, and attitude

toward AI. All items followed a seven-point Likert scale (1=strongly disagree, 7=strongly

agree).
Figure 2: Experimental stimulus depicting low-risk investment recommendation.
Figure 3: Experimental stimulus depicting high-risk investment recommendation.

3.3. Measures

Participants’ gender, age, and investment self-efficacy were used as control variables

in all analyses. Gender was captured as either male or female. Age was captured in years.

Investment self-efficacy was measured using items adapted from Montford and Goldsmith
(2016). The final dependent variable in the API model shown in Figure 1 is behavioral

intention to accept AI-based recommendation. This was measured using items adapted from

Gursoy et al. (2019). Attitude toward AI is the independent variable in the conceptual model.

It was measured using items adapted from Belanche et al. (2019). The other two variables in

the model include trust in AI and perceived accuracy of AI. These were measured using items

adapted from Jamaludin and Ahmad (2013) and Gursoy et al. (2019) respectively. The

questionnaire items for each of the constructs are listed in Table 1.

Table 1: Questionnaire items for the constructs.

Investment self-efficacy (Montford & Goldsmith, 2016)
Item 1: I believe I have the required skills and knowledge in making stock investment decisions.
Item 2: I rely on my previous experiences in making stock investment decisions for my next investment.
Item 3: I am able to analyze stock prices reasonably well based on my own knowledge, skills and abilities.

Behavioral intention to accept AI-based recommendation (Gursoy et al., 2019)
Item 1: I would like to follow the call based on AI recommendation.
Item 2: I intend to accept the call based on AI recommendation.
Item 3: I would prefer to follow the call based on AI recommendation.

Attitude toward AI (Belanche et al., 2019)
Item 1: Using AI-based recommendation systems for making investment decisions is a good idea.
Item 2: Using AI-based recommendation systems for making investment decisions is a wise idea.
Item 3: I am open to use AI-based recommendation systems for making investment decisions.

Trust in AI (Jamaludin & Ahmad, 2013)
Item 1: I believe AI-based recommendation systems are trustworthy.
Item 2: I believe AI-based recommendation systems are reliable.
Item 3: AI-based recommendation systems cannot be trusted, there are too many uncertainties. (R)

Perceived accuracy of AI (Gursoy et al., 2019)
Item 1: AI-based recommendation systems are more accurate than human beings.
Item 2: AI-based recommendation systems are not affected by human errors.
Item 3: AI-based recommendation systems are more consistent than human beings.
3.4. Data Analyses

Data were analyzed using partial least squares structural equation modeling (PLS-

SEM). To ensure reliability of the measures, Cronbach’s alpha and composite reliability were

used. Validity was checked in terms of convergent validity and discriminant validity.

Common method bias was tested using Harman’s one-factor test, in which all items were entered into a

principal component factor analysis (Podsakoff & Organ, 1986; Shiau & Luo, 2012). More

than one factor emerged, indicating that common method bias was not a problem. The

assessment of the structural model included the coefficient of determination (R2), and the

cross-validated redundancy measure (Q2).
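Although these statistics are produced by standard PLS-SEM software, the reliability and convergent validity criteria referred to above can be illustrated with a short, self-contained sketch. The code below computes Cronbach’s alpha from raw item scores, and composite reliability (CR) and average variance extracted (AVE) from standardized indicator loadings; all numbers are hypothetical and are not the study’s data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """CR from standardized loadings, with error variance taken as 1 - lambda^2."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading of a construct's indicators."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical three-item construct (rows = respondents, columns = items)
scores = np.array([[6, 5, 6], [2, 3, 2], [5, 6, 5], [4, 4, 5], [7, 6, 6]])
loadings = [0.92, 0.87, 0.93]  # illustrative standardized loadings

print(cronbach_alpha(scores))                # compare against the 0.7 threshold
print(composite_reliability(loadings))       # compare against the 0.7 threshold
print(average_variance_extracted(loadings))  # compare against the 0.5 threshold
```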

To examine the moderating effect of risk level, a multi-group analysis was conducted

to compare data from the two experimental conditions of low-risk and high-risk investment

recommendations. Measurement invariance was tested. As reported in Appendix A, the

loadings between the latent variables and their indicators were similar for both the groups,

allowing for a meaningful cross-group analysis. Thereafter, the group comparison method

was applied to identify if the standardized path coefficients for the two groups of participants

differed significantly (Keil et al., 2000). The roles of gender, age and investment self-efficacy

were controlled in all the PLS-SEM analyses. In particular, the three control variables were

added by connecting them to the main endogenous variable (users’ intention to accept AI-

based recommendation).
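The group comparison step described above tests whether a standardized path coefficient differs between the two experimental conditions using a pooled standard error (Keil et al., 2000). Published formulations of this parametric test weight the two standard errors slightly differently, so the sketch below should be read as one common variant rather than the exact computation used in this paper; the input values are illustrative, not results from the study.

```python
import math

def pls_group_comparison_t(beta1, se1, n1, beta2, se2, n2):
    """Parametric t-test for a path-coefficient difference across two groups.

    Pooled-standard-error approach in the spirit of Keil et al. (2000); this
    sketch assumes (n - 1) / (n1 + n2 - 2) weights for the squared standard
    errors, one of several published variants.
    """
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) / df) * se1 ** 2 + ((n2 - 1) / df) * se2 ** 2)
    t = (beta1 - beta2) / (s_pooled * math.sqrt(1 / n1 + 1 / n2))
    return t, df

# Illustrative (hypothetical) path coefficients, standard errors and group sizes
t, df = pls_group_comparison_t(beta1=0.30, se1=0.12, n1=180, beta2=0.55, se2=0.10, n2=180)
print(round(t, 2), df)
```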

4. Results

4.1. Sample, Measurement Evaluation and Descriptive Statistics

An initial pool of 416 participants was invited to this study. Of these, 16 participants

did not respond to the invitation, 19 did not pass the screening check as they had never

invested in the stock market, and 13 dropped midway. Complete responses from 368 (416 -
16 - 19 - 13) participants were thus admitted for analysis. Such a sample size is comparable

with recent studies (Shin, 2020; Williams, 2021).

Specifically, 191 participants were assigned to the low-risk investment condition

while 190 were assigned to the high-risk investment condition. Eight from the first condition

and five from the second dropped midway. The final tallies were 183 participants in the low-

risk investment condition and 185 in the high-risk investment condition.

In terms of demographics, 213 (57.9%) were male and 155 (42.1%) were female. The

average age was 31.77 years (Min = 21, Max = 63, SD = 10.80). In terms of educational

qualification, 164 (44.6%) participants had a bachelor’s degree, 144 (39.1%) had a master’s

degree, 23 (6.3%) had ‘O’ or ‘A’ level qualifications, 21 (5.7%) were at diploma/advanced

diploma level, and the other 16 (4.3%) participants had a doctoral degree. In terms of

participants’ experience in the stock market investment, 97 (26.4%) had less than one-year

experience, 126 (34.2%) had one year to less than three years of experience, 97 (26.4%) had

three years to less than six years of experience, and 48 (13%) had greater than six years of

experience. Table 2 presents the descriptive statistics of the sample.

Table 2: Descriptive statistics of the sample.


Constructs Full Dataset Low-Risk Level High-Risk Level
(N = 368) (n = 183) (n = 185)
Gender (frequency)
Male 213 (57.9%) 107 (58.5%) 106 (57.3%)
Female 155 (42.1%) 76 (41.5%) 79 (42.7%)

Age (M ± SD) 31.77 ± 10.80 34.33 ± 11.68 29.24 ± 9.19

Education (frequency)
'O' or 'A' Levels 23 (6.3%) 0 (0%) 23 (12.4%)
Diploma/Advanced Diploma 21 (5.7%) 5 (2.7%) 16 (8.6%)
Bachelor 164 (44.6%) 82 (44.8%) 82 (44.3%)
Master 144 (39.1%) 88 (48.1%) 56 (30.3%)
Doctoral 16 (4.3%) 8 (4.4%) 8 (4.3%)

Investment experience (frequency)


< 1 year 97 (26.4%) 50 (27.3%) 47 (25.4%)
1 year to less than 3 years 126 (34.2%) 78 (42.6%) 48 (25.9%)
3 years to less than 6 years 97 (26.4%) 26 (14.2%) 71 (38.4%)
>= 6 years 48 (13%) 29 (15.8%) 19 (10.3%)

Investment self-efficacy (M ± SD) 3.89 ± 1.41 4.08 ± 1.42 3.70 ± 1.38

Behavioral intention to accept AI- 4.34 ± 1.65 4.85 ± 1.47 3.84 ± 1.66
based recommendation (M ± SD)

Attitude toward AI (M ± SD) 4.20 ± 1.46 4.42 ± 1.33 3.98 ± 1.56

Trust in AI (M ± SD) 4.03 ± 1.32 4.10 ± 1.29 3.95 ± 1.34

Perceived accuracy of AI (M ± SD) 3.85 ± 1.54 3.84 ± 1.64 3.86 ± 1.44

Cronbach’s Alpha (α), composite reliability (CR), and average variance extracted

(AVE) for all the constructs are reported in Table 3. The Cronbach’s α values exceeded the

threshold of 0.7, confirming internal consistency of the measures (Nunnally, 1978). All CR

and AVE values exceeded 0.7 and 0.5 respectively, indicating acceptable convergent validity

(Fornell & Larcker, 1981). Moreover, as shown in Table 4, each item loaded more strongly on its

intended construct than on any other construct. Thus, discriminant validity was confirmed.

Table 3: Internal consistency reliability and convergent validity.

Constructs Cronbach’s CR AVE


α
Investment self-efficacy 0.86 0.91 0.78
Behavioral intention to accept AI-based recommendation 0.89 0.93 0.82
Attitude toward AI 0.94 0.96 0.90
Trust in AI 0.79 0.87 0.70
Perceived accuracy of AI 0.87 0.92 0.79

Table 4: Item loadings and cross loadings.


(1) (2) (3) (4) (5)
Investment Behavioral Attitude Trust Perceived
self-efficacy intention to toward in AI accuracy
accept AI-based AI of AI
Constructs Items recommendation
Item 1 0.89 0.31 0.44 0.30 0.30
(1) Item 2 0.85 0.32 0.51 0.32 0.31
Item 3 0.88 0.28 0.38 0.20 0.26
Item 1 0.32 0.94 0.53 0.37 0.33
(2) Item 2 0.33 0.96 0.55 0.40 0.35
Item 3 0.35 0.95 0.54 0.39 0.34
Item 1 0.47 0.54 0.94 0.63 0.65
(3) Item 2 0.50 0.52 0.94 0.68 0.65
Item 3 0.45 0.54 0.93 0.67 0.61
Item 1 0.39 0.41 0.71 0.89 0.72
(4) Item 2 0.30 0.36 0.61 0.90 0.61
Item 3 0.01 0.23 0.47 0.72 0.47
Item 1 0.31 0.37 0.63 0.70 0.92
(5) Item 2 0.22 0.27 0.53 0.59 0.87
Item 3 0.35 0.33 0.67 0.69 0.93
Note. The bolded values indicate the loading of each item to a construct in the respective
columns. The other values indicate the cross loadings.

4.2. Inferential Statistics

As described in Section 3.4, each of the hypotheses was tested using PLS-SEM. The

statistical significance of the path coefficients was assessed. The control variables (gender,

age, and investment self-efficacy) were consistently non-significant (gender: β = -0.03, t =

0.34, p > 0.05; age: β = -0.004, t = 0.05, p > 0.05; self-efficacy: β = 0.08, t = 0.77, p > 0.05).

After accounting for the control variables, the following hypothesized relationships

were found to be significant: Attitude toward AI was positively associated with behavioral

intention to accept AI-based recommendation (β = 0.54, t = 3.62, p < 0.001). This lends

support to H1. Next, attitude toward AI was positively associated with trust in AI (β = 0.71, t

= 10.42, p < 0.001) and perceived accuracy of AI (β = 0.68, t = 10.13, p < 0.001), which lend

support to H2 and H3 respectively.

However, the relationships of trust and perceived accuracy with behavioral intention

to accept AI-based recommendation were not significant. Therefore, H4 and H5 are not

supported. Table 5 summarizes the results of testing the hypotheses H1-H5 using PLS-SEM.

Table 5: Hypotheses testing results for H1-H5.


Full dataset (N=368)
β Std. t-stat
Error
H1: Attitude toward AI → Behavioral intention to accept 0.54 0.15 3.62***
AI-based recommendation
H2: Attitude toward AI → Trust in AI 0.71 0.07 10.42***
H3: Attitude toward AI → Perceived accuracy of AI 0.68 0.07 10.13***
H4: Trust in AI → Behavioral intention to accept AI-based 0.06 0.14 0.40
recommendation
H5: Perceived accuracy of AI → Behavioral intention to -0.08 0.15 0.53
accept AI-based recommendation

R2 Value
Trust in AI 50.4%
Perceived accuracy of AI 46.2%
Behavioral intention to accept AI-based recommendation 33.5%
Note. * p < 0.05, ** p < 0.01, *** p < 0.001.
Control variables: Gender, Age, Investment self-efficacy.

The R2 values for the endogenous constructs including trust in AI, perceived accuracy

of AI, and behavioral intention to accept AI-based recommendation were 50.4%, 46.2% and

33.5% respectively. The cross-validated redundancy measure (Q2) was also examined. With

an omission distance of seven, the Q2 values for all endogenous constructs were positive

(Q2 > 0), indicating that the model has predictive relevance for these constructs (Hair et al., 2019).

To test the moderating effect of risk level, a multi-group PLS analysis was conducted.

Statistical tests were performed to check the homogeneity of the two groups in terms of the

control variables of gender, age, and investment self-efficacy. With respect to gender, Chi-

square results indicated no significant difference (χ2(1, N = 368) = 0.05, Cramer’s V = 0.01,

p > 0.05). With respect to age, there was a significant difference between the two groups;

t(345.03) = 4.65, p < 0.01. Participants’ age in the low-risk condition (34.33 ± 11.68) was

significantly higher than that in the high-risk condition (29.24 ± 9.19). With respect to

investment self-efficacy, there was a statistically significant difference between the two

groups; t(366) = 2.66, p < 0.01. Participants’ investment self-efficacy in the low-risk

condition (4.08 ± 1.42) was significantly higher than that in the high-risk condition (3.70 ±

1.38). That said, the control variables remained consistently non-significant in the high-risk
condition (gender: β = 0.03, t = 0.44, p > 0.05; age: β = 0.02, t = 0.4, p > 0.05; self-efficacy: β

= 0.11, t = 1.2, p > 0.05) as well as the low-risk condition (gender: β = -0.1, t = 1.04, p >

0.05; age: β = -0.03, t = 0.3, p > 0.05; self-efficacy: β = 0.01, t = 0.1, p > 0.05). The results of

the API model for the low-risk and the high-risk conditions are depicted in Figure 4 and

Figure 5 respectively.
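The homogeneity checks reported above can be reproduced with standard routines: a chi-square test of independence for gender by condition, and independent-samples t-tests for age (with Welch’s correction, consistent with the fractional degrees of freedom reported) and investment self-efficacy. The sketch below uses the gender counts from Table 2 but simulates hypothetical age values, since the raw responses are not available.

```python
import numpy as np
from scipy import stats

# Hypothetical age data drawn to mimic the reported group means and SDs
rng = np.random.default_rng(0)
age_low = rng.normal(34.33, 11.68, 183)   # low-risk condition
age_high = rng.normal(29.24, 9.19, 185)   # high-risk condition

# Welch's t-test (unequal variances assumed)
t_age, p_age = stats.ttest_ind(age_low, age_high, equal_var=False)

# Chi-square test of independence: gender x condition, counts taken from Table 2
gender_table = np.array([[107, 106],   # male: low-risk, high-risk
                         [76, 79]])    # female: low-risk, high-risk
chi2, p_gender, dof, expected = stats.chi2_contingency(gender_table)

print(round(t_age, 2), round(p_age, 3))
print(round(chi2, 2), round(p_gender, 3), dof)
```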

Figure 4. Path coefficients for the low-risk condition.


Figure 5. Path coefficients for the high-risk condition.

As shown in Table 6, the group comparison method showed a significant difference

between the two groups for the relationship between attitude toward AI and behavioral

intention to accept AI-based recommendation. Compared with the participants in the high-

risk situation, those in the low-risk situation showed a stronger relation (t = 4.08, p < 0.001).

This lends support to H6(a).

Furthermore, there was a significant difference between the two groups for the

relationship between trust in AI and behavioral intention to accept AI-based recommendation.

Compared with the participants in the low-risk situation, those in the high-risk situation

showed a stronger relation (t = -20.54, p < 0.001). Hence, H6(b) is supported.

Finally, there was also a significant difference between the two groups for the

relationship between perceived accuracy of AI and behavioral intention to accept AI-based


recommendation. Compared with the participants in the low-risk situation, those in the high-

risk situation showed a stronger relation (t = -24.79, p < 0.001). This lends support to H6(c).

Table 6: PLS multi-group results for H6.


Low-risk High-risk
(blue-chip: n=183) (penny: n=185) t-stat
β Std. Error β Std. Error
H6(a): Attitude toward AI → 0.45** 0.14 0.39** 0.12 4.08***
Behavioral intention to accept AI-
based recommendation
H6(b): Trust in AI → Behavioral -0.05 0.16 0.21* 0.10 -20.54***
intention to accept AI-based
recommendation
H6(c): Perceived accuracy of AI → -0.16 0.16 0.19 0.12 -24.79***
Behavioral intention to accept AI-
based recommendation

R2 Value
Trust in AI 46% 56.3%
Perceived accuracy of AI 43.3% 53.7%
Behavioral intention to accept AI- 12.2% 62%
based recommendation

Note. * p < 0.05, ** p < 0.01, *** p < 0.001.


Control variables: Gender, Age, Investment self-efficacy.

5. Discussion

Four major findings could be gleaned from this research. First, based on the results

corresponding to H1, attitude toward AI was positively associated with behavioral intention

to accept AI-based recommendations (β = 0.54, p < 0.001). Although recent evidence

suggests that the attitude toward AI could be less favorable for black-box vis-à-vis

transparent systems (Ochmann et al., 2021), this paper reveals that users’ attitude still plays a

crucial role in the case of opaque AI systems. As long as they hold a favorable attitude

toward AI systems, users seem to accept their inability to understand the underlying

computational complexities.
Second, from the results corresponding to H2 and H3, attitude toward AI was

positively associated with trust in AI (β = 0.71, p < 0.001) and perceived accuracy of AI (β =

0.68, p < 0.001). This is generally consistent with long-standing research findings (e.g.,

Dwivedi et al. 2019; Venkatesh et al., 2003) that attitude is not only a key predictor of

embracing technology but also shapes trust and perceived accuracy of what technology can

offer. This persistent importance of attitude has implications for research in human-AI

interaction. Going forward, as AI becomes more pervasive, it is important for public debate

surrounding AI to avoid veering toward either exaggerated optimism or helpless pessimism.

Neither automation bias nor AI aversion is helpful to society (Bigman & Gray, 2018; Chong

et al., 2022; Dietvorst et al., 2015; Tomsett et al., 2020; Wickramasinghe et al., 2020).

Instead, it would be wise to focus realistically on what AI can do, appreciate its potential, and

acknowledge its limits.

Third, the results corresponding to H4 and H5 show that neither trust in AI nor

perceived accuracy of AI was significantly associated with behavioral intention to accept AI-

based recommendations in the full sample. This is at odds with prior research (Ho et al.,

2017; Liu & Tao, 2022; Schaffer et al., 2015) and could be attributed to the unique context of

investigation of investment recommendation involving blue-chip and penny stocks, which

has not been explored hitherto. Thus, the paper not only expands the contextual scope of the

human-AI interaction literature but also enriches it with a counter-intuitive finding that

warrants further inquiry. Future research is needed to shed light on how perception-related

constructs such as trust in AI and perceived accuracy of AI hold different connotations in

different contexts.

Fourth, from the results corresponding to H6, risk level moderated how attitude, trust

and perceived accuracy varied with behavioral intention to accept AI-based

recommendations. In particular, trust (t = -20.54, p < 0.001) and perceived accuracy (t = -


24.79, p < 0.001) were found to better explain AI acceptance intention in high risk rather than

low risk situations. It is evident that the forces affecting users’ decision to embrace AI are

contextually dependent on the level of risk (Rzepka & Berger, 2018).

Prior research suggests that users tend to rely on automation for tasks that call for

logic (Gaudiello et al., 2016; Logg, 2017). Extending the literature, this paper shows that

even for a task such as investment decision-making that may also involve intuition, users

could be open to AI-based recommendations. However, the underlying psychological

mechanism of accepting machine-generated advice depends on the level of risk. When risk is

low, a favourable attitude toward AI seems sufficient to promote machine reliance. However,

when risk is high, a favourable attitude toward AI is a necessary but no longer sufficient

condition for AI acceptance. Instead, to cope with the risk, users carefully deliberate on their

trust and perceived accuracy of AI before deciding whether to accept machine-generated

advice. In other words, compared with the low-risk condition involving blue-chip stocks, the

high-risk condition involving penny stocks compelled the participants to be more vigilant in

their decision-making.

6. Conclusion

This paper seeks to explain the behavioral intention to accept AI-based

recommendations as a function of attitude toward AI, trust, perceived accuracy and risk level.

A conceptual model was proposed and tested through a between-participants experiment

using a simulated AI-enabled investment recommendation system. The results reveal that

attitude toward AI is positively associated with behavioral intention to accept AI-based

recommendations, trust in AI and perceived accuracy of AI. Additionally, risk level

moderates how attitude, trust and perceived accuracy vary with behavioral intention to accept

AI-based recommendations.
6.1. Theoretical Contributions

On the theoretical front, the paper contributes to the human-AI interaction literature in

three ways. First, it proposes an attitude-perception-intention (API) model that sheds light on

the underlying psychological mechanism of how users decide to accept AI-enabled advice.

The model enhances current understanding of the relation between attitude toward AI and

behavioral intention to accept AI-based recommendation (Ho et al., 2017; Liu & Tao, 2022;

Schaffer et al., 2015) by taking into account trust, perceived accuracy and risk level. It shows

users’ decision to embrace AI is contextually-dependent (Rzepka & Berger, 2018), and

specifically, on the level of risk. When risk is low, a favourable attitude toward AI is enough.

However, when risk is high, a favourable attitude alone is no longer sufficient for AI

acceptance. In a state of heightened alert, users become more careful in assessing their trust

in AI and their perceived accuracy of AI before deciding to accept AI-based

recommendations. Put differently, the API model not only deepens the understanding of the

attitude-intention relation in the AI landscape but also adds risk-level as a boundary

condition.

Second, the paper adds to the scholarly understanding of AI recommendation systems in

tasks that call for intuition in finance—an example of a high involvement service—where

human counsel is usually preferred to machine-generated advice (Longoni et al., 2019; Zhang

et al., 2021). Prior research suggests that users readily accept AI especially when dealing with

rule-based and routine work (Gaudiello et al., 2016; Logg, 2017). Extending the literature,

this paper argues that users are also amenable to AI-based recommendations for tasks such as

making investment decisions that demand intuitive judgements. Depending on attitude, trust,

perceived accuracy and risk level, there could be a case for AI acceptance.
Third, this paper represents one of the earliest attempts to apply the conservation of

resources theory in the context of stock market investment. It validates the argument that the

threat of resource loss is viewed saliently in challenging circumstances involving penny

stocks (Hobfoll, 1989; 2011). On the other hand, when investing in blue-chip stocks where

the threat of resource loss is perceived to be minimal, users tend to let their guard down in

making decisions. Additionally, this paper adds to the literature on risk (Bao et al., 2022) by

showing how the level of risk plays a moderating role in AI acceptance. Specifically, in a

high-risk situation, high trust and perceived accuracy are needed for users to buy into AI-

based recommendations.

6.2. Practical Implications

On the practical front, the paper offers insights into how the uptake of AI

recommendation systems can be promoted in high involvement industries such as healthcare

and finance where machine-generated advice has received much resistance (Longoni et al.,

2019; Zhang et al., 2021). As new AI recommendation systems proliferate, it is important for

policymakers to ensure that the public develops a realistic attitude toward AI.

Furthermore, marketing communication for AI recommendation systems should be

tailored according to the decision-making context. For example, in situations where there is

high risk, successful performance of the systems in the past could be recounted to inspire user

confidence. AI systems offering recommendations under high risk should be designed in

ways so as to enhance perceptions of trust and accuracy.

6.3. Limitations and Future Research Directions

Two limitations in this paper need to be acknowledged. One, as with all quantitative

studies, it was not possible to gain richer insights into how the participants made decisions
whether to accept AI-enabled advice. Future research could build on the proposed API model

by using interviews or focus groups to identify other constructs that further explain the

relationship between attitude toward AI and behavioral intention to accept AI.

Another limitation is the methodological parsimony of the experimental setup. No

amount of investable assets was specified in the experiment. Neither were participants

presented with scenarios where an investment portfolio could comprise both high-risk and

low-risk stocks in different proportions. Hence, future research could consider refining the

experiment to reflect a more realistic context under which investment decisions are made.

Hopefully, this will deepen our understanding of how users decide whether to embrace AI.
References
Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl & J.
Beckman (Eds.), Action-control: From cognition to behavior (pp. 11–39). Heidelberg:
Springer.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior.
Englewood Cliffs, NJ: Prentice-Hall.

Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust?
Perceptions about automated decision-making by artificial intelligence. AI & Society,
35(3), 611-623.

Ashoori, M., & Weisz, J. D. (2019). In AI we trust? Factors that influence trustworthiness of
AI-infused decision-making processes. arXiv preprint. Retrieved from
https://arxiv.org/abs/1912.02675

Bao, L., Krause, N. M., Calice, M. N., Scheufele, D. A., Wirz, C. D., Brossard, D., ... &
Xenos, M. A. (2022). Whose AI? How different publics think about AI and its social
impacts. Computers in Human Behavior, 130, 107182.

Belanche, D., Casaló, L.V. & Flavián, C. (2019). Artificial Intelligence in FinTech:
understanding robo-advisors adoption among customers. Industrial Management &
Data Systems, 119(7), 1411-1430.

Berkelaar, A. B., Kouwenberg, R., & Post, T. (2004). Optimal portfolio choice under loss
aversion. Review of Economics and Statistics, 86(4), 973-987.

Bickmore, T. W., Trinh, H., Olafsson, S., O'Leary, T. K., Asadi, R., Rickles, N. M., & Cruz,
R. (2018). Patient and consumer safety risks when using conversational assistants for
medical information: an observational study of Siri, Alexa, and Google Assistant.
Journal of Medical Internet Research, 20(9), e11510.

Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions.
Cognition, 181, 21–34.

Chang, M., Ng, J., & Yu, K. (2008). The influence of analyst and management forecasts on
investor decision making: An experimental approach. Australian Journal of
Management, 33(1), 47-67.

Cheng, X., Guo, F., Chen, J., Li, K., Zhang, Y., & Gao, P. (2019). Exploring the trust
influencing mechanism of robo-advisor service: A mixed method approach.
Sustainability, 11(18), Article 4917.

Choe, Y. C., Park, J., Chung, M., & Moon, J. (2009). Effect of the food traceability system
for building trust: Price premium and buying behavior. Information Systems
Frontiers, 11(2), 167-179.
Chong, L., Zhang, G., Goucher-Lambert, K., Kotovsky, K., & Cagan, J. (2022). Human
confidence in artificial intelligence and in themselves: The evolution and impact of
confidence on adoption of AI advice. Computers in Human Behavior, 127, 107018.

Cullen, J. B., & Gordon, R. H. (2007). Taxes and entrepreneurial risk-taking: Theory and
evidence for the US. Journal of Public Economics, 91(7-8), 1479-1505.

Cummings, M. L. (2006). Automation and accountability in decision support system interface
design. Journal of Technology Studies, 32(1), 23–31.

Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital
Journalism, 5(7), 809-828.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People
erroneously avoid algorithms after seeing them err. Journal of Experimental
Psychology: General, 144(1), 114–126.

Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-
examining the unified theory of acceptance and use of technology (UTAUT):
Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719-734.

Ferrario, A., Loi, M., & Viganò, E. (2019). In AI we trust Incrementally: A multi-layer model
of trust to analyze Human-Artificial intelligence interactions. Philosophy &
Technology, 1-17.

Fornell, C., & Larcker, D. (1981). Structural equation models with unobserved variables and
measurement error: Algebra and Statistics, Journal of Marketing Research, 18(3),
382-388.

Fortune Business Insights. (2020). Technology & media: Artificial intelligence market.
Fortune Business Insights. Retrieved from
https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-market-100114

Gaudiello, I., Zibetti, E., Lefort, S., Chetouani, M., & Ivaldi, S. (2016). Trust as indicator of
robot functional and social acceptance. An experimental study on user conformation
to iCub answers. Computers in Human Behavior, 61, 633-655.

Gool, E. V., Ouytsel, J. V., Ponnet, K., & Walrave, M. (2015). To share or not to share?
Adolescents’ self-disclosure about peer relationships on Facebook: An application of
the prototype willingness model. Computers in Human Behavior, 44, 230-239.

Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumer’s acceptance of artificially
intelligent (AI) device use in service delivery. International Journal of Information
Management, 49, 157-169.

Guszcza, J., Lewis, H., & Evans-Greenwood, P. (2017). Cognitive collaboration: Why
humans and computers think better together. Deloitte Review, 20, 8-29.
Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report
the results of PLS-SEM. European Business Review, 31(1), 2-24.

Ho, S. M., Ocasio-Velázquez, M., & Booth, C. (2017). Trust or consequences? Causal effects
of perceived risk and subjective norms on cloud technology adoption. Computers &
Security, 70, 581-595.

Hobfoll, S. E. (1989). Conservation of resources: A new attempt at conceptualizing stress.
American Psychologist, 44(3), 513-524.

Hobfoll, S. E. (2011). Conservation of resource caravans and engaged settings. Journal of
Occupational and Organizational Psychology, 84(1), 116-122.

Jacobsen, R. M., Bysted, L., Johansen, P. S., Papachristos, E., & Skov, M. B. (2020).
Perceived and measured task effectiveness in human-AI collaboration. In Extended
Abstracts of the Conference on Human Factors in Computing Systems (pp. 1-9).
ACM.

Jamaludin, A., & Ahmad, F. (2013). Investigating the relationship between trust and intention
to purchase online. Business and Management Horizons, 1(1), 1-9.

Jasiniak, M. (2018). Determinants of investment decisions on the capital market. Financial
Internet Quarterly, 14(2), 1-8.

Jha, S., & Topol, E. J. (2016). Adapting to artificial intelligence: Radiologists and
pathologists as information specialists. Jama, 316(22), 2353-2354.

Keil, M., Tan, B., Wei, K. K., & Saarinen, T. (2000). A cross-cultural study on escalation of
commitment behavior in software projects. MIS Quarterly, 24(2), 299-325.

Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2018). Human
decisions and machine predictions. The Quarterly Journal of Economics, 133(1), 237-
293.

Komiak, S., & Benbasat, I. (2006). The effects of personalization and familiarity on trust and
adoption of recommendation agents. MIS Quarterly, 30(4), 941-960.

Lai, J. Y. (2009). How reward, computer self‐efficacy, and perceived power security affect
knowledge management systems success: An empirical investigation in high‐tech
companies. Journal of the American Society for Information Science and Technology,
60(2), 332-347.

Li, N. L., & Zhang, P. (2005). The intellectual development of human-computer interaction
research: A critical assessment of the MIS literature (1990-2002). Journal of the
Association for information Systems, 6(11), Article 9.

Liang, T., Robert, L., Sarker, S., Cheung, C. M., Matt, C., Trenz, M., & Turel, O. (2021).
Artificial intelligence and robots in individuals' lives: how to align technological
possibilities and ethical issues. Internet Research, 31(1), 1-10.
Lichtenthaler, U. (2019). Extremes of acceptance: Employee attitudes toward artificial
intelligence. Journal of Business Strategy, 41(5), 39-45.

Lim, N. (2003). Consumers' perceived risk: Sources versus consequences. Electronic
Commerce Research and Applications, 2(3), 216-228.

Liu, K., & Tao, D. (2022). The roles of trust, personalization, loss of privacy, and
anthropomorphism in public acceptance of smart healthcare services. Computers in
Human Behavior, 127, 107026.

Logg, J. M. (2017). Theory of machine: When do people rely on algorithms? Harvard
Business School working paper series #17-086. Retrieved from
https://dash.harvard.edu/handle/1/31677474

Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial
intelligence. Journal of Consumer Research, 46(4), 629-650.

Lu, L., Cai, R., & Gursoy, D. (2019). Developing and validating a service robot integration
willingness scale. International Journal of Hospitality Management, 80, 36-51.

Manzey, D., Reichenbach, J., & Onnasch, L. (2012). Human performance consequences of
automated decision aids: The impact of degree of automation and system experience.
Journal of Cognitive Engineering and Decision Making, 6(1), 57-87.

Markoff, J. (2016). Machines of loving grace: The quest for common ground between
humans and robots. Harper Collins Publishers.

Montford, W., & Goldsmith, R. E. (2016). How gender and financial self‐efficacy influence
investment risk taking. International Journal of Consumer Studies, 40(1), 101-106.

Nguyen, T. T. H., Nguyen, N., Nguyen, T. B. L., Phan, T. T. H., Bui, L. P., & Moon, H. C.
(2019). Investigating consumer attitude and intention towards online food purchasing
in an emerging economy: An extended TAM approach. Foods, 8(11), Article 576.

Noah, S., & Lingga, M. T. P. (2020). The Effect of Behavioral Factors in Investor’s
Investment Decision. In Conference Series (Vol. 3, No. 1, pp. 398-413).

Nourani, M., Kabir, S., Mohseni, S., & Ragan, E. D. (2019). The effects of meaningful and
meaningless explanations on trust and perceived system accuracy in intelligent
systems. In Proceedings of the AAAI Conference on Human Computation and
Crowdsourcing (Vol. 7, No. 1, pp. 97-105).

Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.

Ochmann, J., Zilker, S., & Laumer, S. (2021). The evaluation of the black box problem for
AI-based recommendations: An interview-based study. In International Conference
on Wirtschaftsinformatik (pp. 232-246). Springer, Cham.

Pavlou, P. A., & Fygenson, M. (2006). Understanding and predicting electronic commerce
adoption: An extension of the theory of planned behavior. MIS Quarterly, 30(1), 115-
143.

Pember, S. E., Zhang, X., Baker, K., & Bissell, K. (2018). Application of the theory of
planned behavior and uses and gratifications theory to food-related photo-sharing on
social media. Californian Journal of Health Promotion, 16(1), 91-98.

Persson, A., Laaksoharju, M., & Koga, H. (2021). We mostly think alike: Individual
differences in attitude towards AI in Sweden and Japan. The Review of Socionetwork
Strategies, 15(1), 123-142.

Qiu, L., & Benbasat, I. (2010). A study of demographic embodiments of product
recommendation agents in electronic commerce. International Journal of
Human-Computer Studies, 68(10), 669-688.

Reb, J. (2008). Regret aversion and decision process quality: Effect of regret salience on
decision process carefulness. Organizational Behavior and Human Decision
Processes, 105(2), 169-182.

Rudin, C., Wang, C., & Coker, B. (2018). The age of secrecy and unfairness in recidivism
prediction. arXiv preprint arXiv:1811.00731. Retrieved from
https://arxiv.org/abs/1811.00731

Rzepka, C., & Berger, B. (2018). User Interaction with AI-enabled Systems: A Systematic
Review of IS Research. Thirty Ninth International Conference on Information
Systems, Article 7.

Sanakulov, N., & Karjaluoto, H. (2015). Consumer adoption of mobile technologies: A
literature review. International Journal of Mobile Communications, 13(3), 244-275.

Sanne, P. N., & Wiese, M. (2018). The theory of planned behaviour and user engagement
applied to Facebook advertising. South African Journal of Information Management,
20(1), 1-10.

Sautua, S. (2017). Does risk cause inertia in decision making? An experimental study of the
role of regret aversion and indecisiveness. Journal of Economic Behavior &
Organization, 136, 1-14.

Schaffer, J., Hollerer, T., & O'Donovan, J. (2015). Hypothetical recommendation: A study of
interactive profile manipulation behavior for recommender systems. In Proceedings of
the International Florida Artificial Intelligence Research Society Conference (pp.
507-512). AAAI.

Schwert, G. W. (1989). Why does stock market volatility change over time? The Journal of
Finance, 44(5), 1115-1153.

Shin, D. (2020). How do users interact with algorithm recommender systems? The interaction
of users, algorithms, and performance. Computers in Human Behavior, 109, 106344.

Shmueli, L., Benbasat, I., & Cenfetelli, R. T. (2016). A construal-level approach to
persuasion by personalization. In Proceedings of the International Conference on
Information Systems (pp. 1799-1817). AIS.

Sitkin, S. B., & Pablo, A. L. (1992). Reconceptualizing the determinants of risk behavior.
Academy of Management Review, 17(1), 9-38.

Sitkin, S. B., & Weingart, L. R. (1995). Determinants of risky decision-making behavior: A
test of the mediating role of risk perceptions and propensity. Academy of Management
Journal, 38(6), 1573-1592.

Sloane, E. B., & Silva, R. J. (2020). Artificial intelligence in medical devices and clinical
decision support systems. In Clinical Engineering Handbook (pp. 556-568). Academic
Press.

Smith, C. D., & Mentzer, J. T. (2010). User influence on the relationship between forecast
accuracy, application and logistics performance. Journal of Business Logistics, 31(1),
159-177.

Sun, C. (2020). Research on investment decision-making model from the perspective of
“Internet of Things + Big data”. Future Generation Computer Systems, 107, 286-292.

Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.

Tomsett, R., Preece, A., Braines, D., Cerutti, F., Chakraborty, S., Srivastava, M., ... &
Kaplan, L. (2020). Rapid trust calibration through interpretable and risk-aware AI.
Patterns, 1(4), Article 100049.

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of
information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.

Waweru, N. M., Munyoki, E., & Uliana, E. (2008). The effects of behavioural factors in
investment decision-making: A survey of institutional investors operating at the
Nairobi Stock Exchange. International Journal of Business and Emerging Markets,
1(1), 24-41.

Wickramasinghe, C. S., Marino, D. L., Grandio, J., & Manic, M. (2020). Trustworthy AI
development guidelines for human system interaction. In Proceedings of the
International Conference on Human System Interaction (pp. 130-136). IEEE.

Williams, M. D. (2021). Social commerce and the mobile platform: Payment and security
perceptions of potential users. Computers in Human Behavior, 115, 105557.

Wu, Y., Mou, Y., Li, Z., & Xu, K. (2020). Investigating American and Chinese subjects’
explicit and implicit perceptions of AI-generated artistic work. Computers in Human
Behavior, 104, 106186.

Xu, D., Huang, W. W., Wang, H., & Heales, J. (2014). Enhancing e-learning effectiveness
using an intelligent agent-supported personalized virtual learning environment: An
empirical investigation. Information & Management, 51(4), 430-440.

Zhang, L., Pentina, I., & Fan, Y. (2021). Who do you choose? Comparing perceptions of
human vs robo-advisor in the context of financial services. Journal of Services
Marketing, 35(5), 634-646.

Złotowski, J., Yogeeswaran, K., & Bartneck, C. (2017). Can we control it? Autonomous
robots threaten human identity, uniqueness, safety, and resources. International
Journal of Human-Computer Studies, 100, 48-54.

Appendix A

Table A1: Item loadings and cross loadings for high-risk condition.
Constructs: (1) Investment self-efficacy; (2) Behavioral intention to accept AI-based
recommendation; (3) Attitude toward AI; (4) Trust in AI; (5) Perceived accuracy of AI.

Construct   Item     (1)     (2)     (3)     (4)     (5)
(1)         Item 1   0.88    0.46    0.48    0.36    0.39
(1)         Item 2   0.82    0.48    0.57    0.40    0.41
(1)         Item 3   0.88    0.51    0.45    0.27    0.35
(2)         Item 1   0.44    0.94    0.70    0.63    0.65
(2)         Item 2   0.46    0.97    0.72    0.68    0.65
(2)         Item 3   0.38    0.97    0.73    0.64    0.63
(3)         Item 1   0.54    0.71    0.94    0.69    0.72
(3)         Item 2   0.59    0.72    0.95    0.71    0.70
(3)         Item 3   0.51    0.70    0.94    0.72    0.67
(4)         Item 1   0.49    0.67    0.77    0.84    0.72
(4)         Item 2   0.30    0.54    0.59    0.88    0.51
(4)         Item 3   0.17    0.49    0.50    0.82    0.50
(5)         Item 1   0.42    0.66    0.68    0.67    0.91
(5)         Item 2   0.32    0.50    0.53    0.53    0.83
(5)         Item 3   0.44    0.61    0.72    0.64    0.92
Note. For each item, the value in the column of its own construct is the item loading; the
other values are cross loadings.

Table A2: Item loadings and cross loadings for low-risk condition.
Constructs: (1) Investment self-efficacy; (2) Behavioral intention to accept AI-based
recommendation; (3) Attitude toward AI; (4) Trust in AI; (5) Perceived accuracy of AI.

Construct   Item     (1)     (2)     (3)     (4)     (5)
(1)         Item 1   0.89    0.12    0.37    0.26    0.23
(1)         Item 2   0.87    0.12    0.43    0.25    0.21
(1)         Item 3   0.89    0.12    0.29    0.15    0.18
(2)         Item 1   0.12    0.93    0.27    0.08    0.06
(2)         Item 2   0.13    0.96    0.31    0.12    0.10
(2)         Item 3   0.13    0.93    0.27    0.14    0.11
(3)         Item 1   0.38    0.31    0.93    0.59    0.61
(3)         Item 2   0.39    0.26    0.93    0.66    0.62
(3)         Item 3   0.37    0.28    0.92    0.64    0.60
(4)         Item 1   0.30    0.16    0.67    0.89    0.73
(4)         Item 2   0.28    0.11    0.65    0.90    0.72
(4)         Item 3   0.17    0.06    0.26    0.72    0.47
(5)         Item 1   0.23    0.14    0.62    0.75    0.94
(5)         Item 2   0.14    0.06    0.55    0.66    0.93
(5)         Item 3   0.27    0.05    0.65    0.74    0.87
Note. For each item, the value in the column of its own construct is the item loading; the
other values are cross loadings.
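
The cross-loading pattern reported in Tables A1 and A2 can be checked mechanically: each item should load more strongly on its own construct than on any other construct. The sketch below illustrates this check in Python with pandas (an assumption; the paper does not state which software produced the loadings), using only the first three rows of Table A1 as example values.

```python
import pandas as pd

# Illustrative values: loadings of the three investment self-efficacy items
# (construct 1) on all five constructs, taken from the first rows of Table A1.
loadings = pd.DataFrame(
    {
        "(1)": [0.88, 0.82, 0.88],
        "(2)": [0.46, 0.48, 0.51],
        "(3)": [0.48, 0.57, 0.45],
        "(4)": [0.36, 0.40, 0.27],
        "(5)": [0.39, 0.41, 0.35],
    },
    index=["Item 1", "Item 2", "Item 3"],
)

own_construct = "(1)"  # construct these three items are intended to measure

# Discriminant validity via cross loadings: each item's loading on its own
# construct should be the largest value in its row.
satisfied = (loadings.idxmax(axis=1) == own_construct).all()
print(f"Cross-loading criterion satisfied for construct {own_construct}: {satisfied}")
```

Running the same comparison over every construct block in Tables A1 and A2 confirms that each bolded loading in the original manuscript exceeds the cross loadings in its row.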
