Perceptions and Acceptance of Artificial Intelligence: A Multi-Dimensional Study
Michael Gerlich
Department of Management, SBS Swiss Business School, 8302 Kloten, Zurich, Switzerland;
[email protected]
Abstract: In this comprehensive study, insights from 1389 participants across the US, UK, Germany, and
Switzerland shed light on the multifaceted perceptions of artificial intelligence (AI). AI’s burgeoning
integration into everyday life promises enhanced efficiency and innovation. The Trustworthy AI
principles by the European Commission, emphasising data safeguarding, security, and judicious
governance, serve as the linchpin for AI’s widespread acceptance. A correlation emerged between
societal interpretations of AI’s impact and elements like trustworthiness, associated risks, and
usage/acceptance. Those discerning AI’s threats often view its prospective outcomes pessimistically,
while proponents recognise its transformative potential. These inclinations resonate with trust and
AI’s perceived singularity. Consequently, factors such as trust, application breadth, and perceived
vulnerabilities shape public consensus, depicting AI as humanity’s boon or bane. The study also
accentuates the public’s divergent views on AI’s evolution, underlining the malleability of opinions
amidst polarising narratives.
1. Introduction
Artificial intelligence (AI) has the potential to revolutionise every area of our lives. The general public's opinion of, and acceptance of, AI, however, remains largely unknown. Modern artificial intelligence has its origins in Alan Turing's test of
machine intelligence in 1950, and the phrase was first used by a professor at Dartmouth College in 1956. Today, the phrase refers to a wide variety of technologies, ideas, and applications. In this research work, the word "AI" is used to describe a collection of computer science methods that allow systems to carry out operations that would typically require human intellect, such as speech recognition, visual perception, decision making, and language translation. All social systems, including economics, politics, science, and education, have recently been impacted by the rapid advancements in artificial intelligence (AI) technology (Luan et al. 2020). A total of 85% of Americans utilised at least one AI-powered tool, according to Reinhart (2018). Nevertheless, people frequently do not recognise the presence of AI applications (Tai 2020). Artificial intelligence is applied in practically
every aspect of life because of the rapid advancement of cybernetic technology. However, some of these applications are still viewed as futuristic, even sci-fi, technologies that are disassociated from everyday reality.
According to Gansser and Reich (2021), AI is just a technology that was created to improve human existence and assist individuals in specific situations. According to Darko et al. (2020), AI is the primary technological advancement of the Fourth Industrial Revolution (Industry 4.0). AI is employed for many good purposes, such as sickness diagnosis, resource preservation, disaster prediction, educational advancement, crime prevention, and risk reduction at work (Brooks 2019). According to Hartwig (2021), AI will increase productivity, open up new options, lessen human mistakes, and take on the burden
Research Contribution
To be competitive, organisations must keep up with technological changes, especially
in this quickly evolving digital era when e-commerce, mobile technology, and the Internet
of Things are gaining popularity. Businesses must adopt technological breakthroughs
like artificial intelligence, but it may be even more crucial to fully comprehend these
approaches and their effects to implement them with the highest accuracy and precision.
To improve the match between AI applications and consumer demands, businesses must
comprehend both the technological and behavioural elements of their clients. All of
this needs to be considered to possibly enhance the overall impact at different stages during the adoption process, eventually increasing user confidence and resolving people's
problems. This survey intends to shed light on how people throughout the world feel
about artificial intelligence. The results are anticipated to improve knowledge of the
variables influencing public acceptability of AI and their potential effects on adoption
and dissemination. The study could potentially spot possible obstacles to the general
deployment of AI and offer solutions.
The current scholarly landscape reveals a conspicuous paucity of empirical studies
centred on the general public’s perceptions of artificial intelligence, thereby offering a fertile
ground for additional inquiry (Pillai and Sivathanu 2020). Gerlich’s (2023) study unearthed
intriguing patterns of consumer behaviour, particularly when the majority of participants
4. Societal issues and frustrations: this variable explores how societal issues such as
politics, climate change, and social inequality may impact people’s attitudes towards
AI, as well as how they perceive AI’s potential to address these issues.
5. Demographic factors: this variable explores how factors such as age, gender, education
level, and income may impact people’s attitudes towards AI.
6. Exposure to AI: this variable explores how people’s level of exposure to AI, including
their use of AI-powered products and services, may impact their attitudes towards AI.
7. Cultural factors: the impact of cultural factors on the usage and adoption of AI
in society.
Demographics:
• Country;
• Age;
• Gender;
• Education;
• Income.
Perceived Benefits of AI:
• Increases efficiency and accuracy;
• Offers convenience and saves time;
• Improves decision-making processes;
• Helps solve complex problems;
• Leads to cost savings;
• Creates new job opportunities.
Perceived Risks of AI:
• Leads to job displacement;
• Violates privacy concerns;
• Used for malicious purposes;
• Causes errors and mistakes;
• Perpetuates bias and discrimination;
• Unintended consequences.
Trust in AI:
• Performs tasks accurately;
• Makes reliable decisions;
• Is predictable;
• Confidence in AI's ability to learn;
• Keeps my personal data secure;
• Used ethically;
• Less or no personal interests compared to humans.
Governmental/societal issues:
• The government does not solve important issues like climate change.
• AI can help address societal issues such as climate change and social inequality.
• Governments cannot solve global issues.
• AI has the potential to solve global issues.
• Politicians and countries have too many vested interests.
• AI has the potential to make society more equitable.
• AI can help create solutions to societal issues.
Usage of/exposure to AI:
• Use AI-powered products and services frequently.
• Basic understanding of what AI is and how it works.
• Zero experience with AI.
• Comfortable using AI-powered products and services.
• Encountered issues with AI-powered products and services in the past.
3. Literature Review
The idea of how people feel about artificial intelligence has recently come into focus
and grown in significance. The perceptions of AI and the variables influencing them are
of increasing interest (Schepman and Rodway 2022). Neudert et al. (2020), after conducting an extensive study covering 142 nations and 154,195 participants, showed that many people worry about the hazards of utilising AI. Similarly, Zhang and Dafoe (2019)
conducted a survey involving 2000 American individuals and discovered that a sizable
fraction of the participants (41%) favoured the development of AI while another 22% were
opposed to it. The majority of people have a favourable attitude towards robots and AI,
according to a large-scale study that included 27,901 individuals from several European
nations (European Commission & Directorate-General for Communications Networks,
Content & Technology 2017). It was also emphasised that attitudes are mostly a function
of knowledge: higher levels of education and Internet usage were linked to attitudes that
were more favourable towards AI. Additionally, it was shown that individuals who were
younger and male had more favourable views about AI than participants who were female
and older. While numerous demographic determinants of AI views have been identified in
earlier studies, there is still a significant need to explore them in many cultural contexts.
Gerlich's (2023) study of AI-run influencers (virtual influencers) showed that participants placed more trust in, and felt more comfortable with, AI influencers than with human influencers.
Therefore, there may be a variety of factors influencing a person’s propensity to adopt AI
in particular application areas. In a thorough study, Park and Woo (2022) discovered that
the adoption of AI-powered applications was predicted by personality traits; psychological
factors like inner motivation, self-efficacy, voluntariness, and performance expectation;
and technological factors like perceived practicality, perceived ease of use, technology
complexity, and relative advantage. It was also discovered that facilitating factors, such
as user experience and cost; factors related to personal values, such as optimism about
science and technology, anthropocentrism, and ideology; and factors regarding risk per-
ception, such as perceived risk, perceived benefit, positive views of technology, and trust
in government, were significantly associated with the acceptance of smart information
technologies. Additionally, subjective norms, culture, technological efficiency, perceived
job loss, confidence, and hedonic variables all have an impact on people’s adoption of AI
technologies (Kaya et al. 2022). The results of another study including 6054 individuals in
the US, Australia, Canada, Germany, and the UK showed that people’s confidence in AI is
low and that trust is crucial for AI acceptance (Gillespie et al. 2021).
The Special Eurobarometer, which was conducted in 2017, investigated not only how
digital technology affects society, the economy, and quality of life but also how the general
public feels about artificial intelligence (AI), robots, and their ability to do a variety of
activities. Overall, 61% of respondents from Europe were enthusiastic about AI and robots,
whereas 30% were disapproving. Additionally, it was claimed that exposure to robots
system usage. According to Brandtzaeg and Følstad (2017), a key motivator for AI (in the
form of chatbot) usage intention is the expectation of a chatbot’s productivity, which is
defined as how well it helps users gain important information in a short period and almost
at any moment.
The term “effort expectancy” (EE) describes how simple or easy it is to utilise AI
systems or apps. The perceived ease of use from the technology acceptance model (TAM)
and the complexity of the system to comprehend and use are among the components of effort expectancy. According to Davis (1989), perceived ease of use refers to how little effort
a person thinks utilising a certain system would need. As a result, if one programme is
thought to be more user-friendly than another, it is more likely to be adopted. The perceived
ease of use not only directly affects usage intentions but also has an effect on perceived
usefulness since, when all other factors are held constant, the perceived usefulness of a
system increases with perceived ease of use.
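Because this claim is structural (ease of use acts on intention both directly and through usefulness), a small simulation may help fix the idea. The coefficients below are illustrative assumptions, not estimates from this study or from Davis (1989).

```python
import numpy as np

# Toy mediation structure: perceived ease of use (PEOU) raises intention
# directly and indirectly via perceived usefulness (PU).
rng = np.random.default_rng(0)
n = 1000
peou = rng.normal(0, 1, n)                       # perceived ease of use
pu = 0.6 * peou + rng.normal(0, 1, n)            # PU rises with PEOU
intention = 0.5 * pu + 0.3 * peou + rng.normal(0, 1, n)

# Total PEOU effect = direct (0.3) + mediated (0.5 * 0.6) = 0.6
slope = np.polyfit(peou, intention, 1)[0]
print(f"total PEOU effect on intention: {slope:.2f}")
```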
According to Venkatesh et al. (2003), social influence (SI) is the extent to which a
person’s close friends or family members think he or she should adopt AI systems or
applications. In the Theory of Planned Behaviour (Ajzen 2005), social influence is modelled
as subjective norms and functions as a direct predictor of usage intention. Subjective norms
are determined by an individual’s perception of whether peers, significant persons, or major
collective groupings believe that a specific behaviour should be carried out. Two components decide this: the motivation to comply, which weighs the social pressures one feels to conform one's behaviour to others' expectations, and normative beliefs, which relate to how significant others view or approve of the behaviour in question (Hale et al. 2003). According to the study by
Gursoy et al. (2019), consumers consider their friends and family to be the most valuable
sources of information when making decisions. Users have been seen to absorb the customs
and ideas of their social groups and behave accordingly, particularly with regard to their
intentions for using technological applications like online services.
The term “facilitating conditions” (FC) describes how much a person thinks they
have the tools or organisational backing to make using AI applications easier. According
to the Theory of Planned Behaviour, perceived behavioural control likewise acts as a facilitating condition (Ajzen 2005). People who perceive themselves to have behavioural control believe they have the skills and resources needed to carry out the behaviour. The greater the capacity and resources people feel themselves to possess, the greater the influence that perceived behavioural control has on behaviour. The Theory of Planned Behaviour also supports two effects of perceived behavioural control: it may affect someone's intention to act in a particular way, and the more control we believe we have over our behaviour, the greater our intention to behave. Further, Venkatesh et al. (2003) examined the
moderating impacts of the variables of gender, age, experience, and voluntariness of use and discovered an improvement in the testing model's predictive power when moderating factors were included. Age, for example, was shown to moderate all of the interactions
between behavioural intentions and determinants (Venkatesh et al. 2003).
Additionally, Venkatesh et al. (2003) investigated the three variables of attitude, self-
efficacy, and anxiety and came to the conclusion that they do not directly affect behavioural
intention. Numerous academics continue to examine the four primary UTAUT components and these external factors in their models. Since its inception, UTAUT has gained popularity as a framework for many analyses of technology adoption behaviour. In
addition to re-examining the original models, researchers also take into account theoretical
and fresh external factors. Williams et al. (2015) found that perceptions of ease of use,
usefulness, attitude, perceived risk, gender, income, and experience have a significant
impact on behavioural intention, while perceptions of age, anxiety, and training have less
of an impact.
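As an illustration of how such moderation is typically tested, the sketch below fits a UTAUT-style regression with an age interaction term; the data file and variable names are hypothetical, not taken from Venkatesh et al. (2003) or from this study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Moderation sketch: does age change the effect of performance expectancy
# (PE) on behavioural intention (BI), controlling for EE, SI, and FC?
df = pd.read_csv("utaut.csv")  # hypothetical file
model = smf.ols("BI ~ PE * age + EE + SI + FC", data=df).fit()

# A significant PE:age coefficient indicates moderation, i.e., the
# PE -> BI relationship varies with age.
print(model.summary().tables[1])
```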
Depending on the particular study environment, social influence and facilitat-
ing conditions demonstrated their value in various ways. For instance, according to
Alshehri et al. (2012), facilitating conditions have weakly favourable effects on Saudi residents' intentions to use and accept e-government services, whereas social influence has little
effect on such intentions. In contrast, social influence was found to be the most significant
and direct determinant of acceptance intention in a study by Jaradat and Al Rababaa (2013)
about the drivers of usage acceptance of mobile commerce in Jordan, whereas facilitating
conditions was suggested to have no significant effects on usage intention. Additionally, it
is advised that moderating variables be investigated because the moderating effects may
differ depending on the testing contexts.
Earlier research on perceived risk similarly came to the conclusion that performance-related risks (privacy, financial, and time risk) had a significant inhibitory effect on usage intentions for electric bill payment systems.
Figure 1. Age distribution of participants.
From the finished education standpoint (Figure 2), most of the respondents are educated to a bachelor's level. This means that the respondents have a fair level of understanding of the subject and are educated to the level that they can give a fair and legitimate opinion on the matter that is not based on a random selection of responses owing to lack of knowledge. These people are the building blocks of the nation and are also the major users of the Internet and the services that can be related to artificial intelligence.
Figure 2. Finished education distribution of participants.
Figure 3. Income distribution of participants.
Table 1 below shows the mean values of the demography-related responses for all countries and for the total questionnaire. This table further validates the assumption that the distribution of all respondents across countries is almost the same, and no skewness or spurious result is anticipated because of any outliers in a certain category. The distribution of the sample population across nations on the parameters of age, gender, education, and income is almost the same.

Table 1. Mean values of the demography.
Country       Age    Gender   Education   Income
CH            2.47   1.53     3.20        1.79
DE            2.28   1.53     3.28        1.73
UK            2.29   1.53     3.28        1.70
USA           2.28   1.54     3.27        1.71
Grand Total   2.31   1.53     3.26        1.72

4.2. Regression Model

A regression test (Table 3) has been performed as a next step to check the validity of the responses captured under the perception-related questions. The perception of the future usage of AI tools is taken as the dependent variable. The independent variables are the basic understanding, comfort level, cultural aspects, and beliefs of the respondents. An attempt has been made to check whether these factors contribute to the adoption and acceptance of AI. The proposed hypotheses are shown in Table 2:

Table 2. Hypothesis structure.

Hypothesis 1. Null: Basic understanding of AI directly relates to the perception that AI is the solution to societal, climatic, and global issues. Alternate: Basic understanding of AI does not relate to the perception that AI is the solution to societal, climatic, and global issues.

Hypothesis 2. Null: Comfort of using AI directly relates to the perception that AI is the solution to societal, climatic, and global issues. Alternate: Comfort of using AI does not relate to the perception that AI is the solution to societal, climatic, and global issues.

Hypothesis 3. Null: Cultural background of people directly relates to the perception that AI is the solution to societal, climatic, and global issues. Alternate: Cultural background of people does not relate to the perception that AI is the solution to societal, climatic, and global issues.

Hypothesis 4. Null: Cultural beliefs of people directly relate to the perception that AI is the solution to societal, climatic, and global issues. Alternate: Cultural beliefs of people do not relate to the perception that AI is the solution to societal, climatic, and global issues.

Hypothesis 5. Null: People believe that AI is the future, and, hence, there is a perception that AI is the solution to societal, climatic, and global issues. Alternate: People do not believe that AI is the future, and, hence, there is no perception that AI is the solution to societal, climatic, and global issues.
The average score for the questions about AI being the future of mankind is 3.98, which shows a positive inclination. Respondents feel that AI is going to be the future of life. But is this perception supported by the underlying reasons? That was tested by the regression model, and it was found that the coefficients of these factors were not significant (p-value was >0.1 at the 90% confidence level). It was interesting that each country has a different combination. While respondents in all countries expressed their comfort with AI and
admitted to believing that AI is the future technology to solve their problems and ease
their pain points, the underlying factors gave a different picture. None of the factors had
coefficients significant at a 95% confidence level, but when checked at 90% confidence,
the overall picture was a bit better. Respondents in the UK and the US showed a basic
understanding of AI. For other countries, the coefficient of basic understanding was not
significant. All respondents denied that cultural background has any impact on the adoption and future use of AI. This finding is not in line with the past studies discussed in the literature review section. However, Swiss nationals responded that AI would need to align with their cultural beliefs for people to trust it in the future. So,
with regression as well, we obtained mixed responses, and, thus, there is a need to include
more parameters in the study to check on the usage of AI in the future. This paves the path
for SEM.
Table 3. Regression results by country (the rows from Intercept to AI is Future report the p-values of the coefficients).

Parameter             CH      DE      UK      US      Total
Multiple R            0.922   0.929   0.912   0.912   0.838
R Square              0.850   0.863   0.833   0.831   0.837
Intercept             0.547   0.803   0.825   0.900   0.484
Basic Understanding   0.358   0.575   0.092   0.093   0.267
Comfort with AI       0.000   0.000   0.000   0.000   0.000
Cultural Background   0.726   0.594   0.090   0.655   0.684
Cultural Belief       0.091   0.412   0.233   0.428   0.108
AI is Future          0.000   0.000   0.000   0.000   0.000
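For readers who want to reproduce this style of per-country screening, a minimal sketch with statsmodels follows. It is an illustration under assumptions: the file name and column names are hypothetical, and the paper does not publish its estimation script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the per-country OLS screening described above, assuming a
# DataFrame of 6-point Likert construct scores; column names are hypothetical.
df = pd.read_csv("survey.csv")  # hypothetical file
model = smf.ols(
    "future_usage ~ basic_understanding + comfort_with_ai"
    " + cultural_background + cultural_belief + ai_is_future",
    data=df[df["country"] == "UK"],
).fit()

# Flag coefficients significant at the 90% confidence level (p < 0.1),
# the threshold used in the text.
print(model.pvalues[model.pvalues < 0.1])
print(f"R squared: {model.rsquared:.3f}")
```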
The final acceptance and rejection of the null hypothesis is shown in Table 4:
H0: The proposed model is a good fit, and multiple factors directly influence the acceptability and
usage of AI in the future.
H1: The proposed model is not a good fit, and multiple factors do not influence the acceptability and
usage of AI in the future.
Path                           Estimate   SE      95% CI
Uses → Usage/acceptance        −0.019     0.026   [−0.070, 0.038]
Risks → Usage/acceptance       −0.036     0.048   [−0.130, 0.055]
Trust → Usage/acceptance       0.854      0.051   [0.764, 0.957]
Issues → Usage/acceptance      −0.083     0.033   [−0.145, −0.015]
Cultural → Usage/acceptance    0.165      0.026   [0.111, 0.214]
The least correlated factors are cultural background and cultural beliefs, which would require people to understand AI better. It will take them some time to understand the impacts of AI on cultural backgrounds and beliefs, and these factors might start playing their role after that.
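As an illustration of how the path model behind these estimates can be specified, here is a minimal sketch assuming the semopy package and hypothetical construct columns; the paper does not state which SEM software was used.

```python
import pandas as pd
from semopy import Model

# Sketch of a structural model with the five paths reported above; the
# construct scores are assumed to be precomputed columns with hypothetical names.
desc = "usage_acceptance ~ uses + risks + trust + issues + cultural"
df = pd.read_csv("constructs.csv")  # hypothetical file

model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, and p-values
```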
In the focus groups, participants showed a significant transformation in their viewpoints after being confronted solely with critical arguments. Irrespective of the participants' preliminary attitudes prior to the focus group
discussions, the ultimate consensus gravitated towards a unified perspective, significantly
influenced by either the commendatory or critical evidence provided during the discussion.
Incorporating the significant influence exerted by public intellectuals, the third stage
of the focus group process further revealed an intriguing trend. Upon introducing the
perspectives of globally recognised figures who have been vocal on matters of AI, a pro-
nounced shift in viewpoints was observed. This finding corroborates the literature on the
role of experts in shaping public opinion; a well-cited example of this influence is evident
in Collins and Evans’ “Rethinking Expertise” (Collins and Evans 2007). Such luminaries
act as “boundary figures,” straddling the line between specialised knowledge and public
discourse, and thus possess the capacity to reshape commonly held viewpoints. In the
focus groups, the inclusion of perspectives from noted experts such as Stephen Hawking
significantly expedited this process of attitudinal transformation, highlighting the pivotal
role that these figures can play in shaping public debates and ultimately influencing policy
(Collins and Evans 2007; Brossard 2013).
The revelations from the focus groups were aligned with the well-established theory of
‘agenda-setting’, wherein the mass media and opinion leaders like academics or renowned
experts play a pivotal role in moulding public opinion (McCombs and Shaw 1972). There-
fore, the significant impact noted when introducing comments from such experts is consis-
tent with existing scholarly contributions on the dynamics of public opinion.
Continuing, the implications of this study are particularly relevant for a scenario
wherein the hypothetical development of a superintelligence threatens humanity and
necessitates decisive action from governments. Supported by previous works such as
Bostrom’s “Superintelligence” (Bostrom 2014), the focus group findings indicate that public
opinion can be both malleable and influenced by authoritative figures, lending urgency
to the need for factual, well-reasoned information dissemination, especially in matters as
consequential as the governance of transformative technologies.
Finally, the focus group study illuminates the challenges governments may face in
adopting effective policies when public opinions are so susceptible to change. A divided
and easily manipulated public makes it difficult, if not impossible, for governments to
adopt coherent, long-term strategies, a concern supported by Zollmann’s study on public
opinion and policy adoption (Zollmann 2017). This susceptibility further underscores the
importance of equipping the general populace with a robust understanding of complex
issues, as this could serve as a foundational pillar for democratic governance in an age
increasingly influenced by advanced technologies.
To support the findings, a correlation between the average responses of each major
factor and the perception of people about AI (whether AI is good for mankind or will lead
to the end of mankind) was also performed (Table 8).
The results of the correlation are in line with expectations. Respondents who associate risks with AI correlate negatively with all of the other factors, such as trust, uniqueness, and usage, and do not believe AI to be the future of mankind; instead, they feel that AI would lead to the end of mankind, hence the positive correlation with that item. In contrast, people who believe in the uses and usage of AI have positive
correlations with trust and uniqueness and believe that AI is the future of mankind. Thus, it
can be safely inferred that the factors of trust, usage, uniqueness, and risks associated with
AI are a good fit for people to frame a perception of AI being good or bad for mankind.
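The correlation check described in this passage can be sketched as follows. The item-to-factor mapping is hypothetical and merely stands in for the paper's actual questionnaire items; only items 45 and 46 mirror the appendix numbering.

```python
import pandas as pd

# Sketch: average the items of each major factor, then correlate the factor
# scores with the two perception items ("AI is the future" / "AI is the end").
df = pd.read_csv("survey.csv")  # hypothetical file
factors = {
    "trust": ["q20", "q21", "q22"],   # hypothetical item columns
    "risks": ["q14", "q15", "q16"],
    "usage": ["q35", "q36", "q37"],
}
scores = pd.DataFrame({name: df[cols].mean(axis=1) for name, cols in factors.items()})
scores["ai_is_future"] = df["q45"]
scores["ai_is_end"] = df["q46"]

# Pearson correlations between factor scores and the two perception items
print(scores.corr().loc[["ai_is_future", "ai_is_end"]].round(2))
```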
The study results presented offer substantive evidence and a nuanced understanding
to answer the research questions (R1, R2, and R3) posed at the beginning of the study.
For R1, concerning the public’s perception of AI in Western countries, the data imply a
direct and significant relationship between trust and the adoption or usage of AI technology.
The factor of trust substantially influences how the public perceives AI, corroborating earlier
works that emphasise the centrality of trust in technology adoption (Lewicki et al. 1998;
Mayer et al. 1995).
For R2, which asks what factors influence public attitudes towards AI, the results
indicate multiple factors, including trust, risks, and uses, playing vital roles. Importantly,
the factor of trust is observed to be the most crucial determinant in shaping public attitudes,
consistent with the existing literature on trust as a catalyst for technological adoption
(Vance et al. 2008). Risks and uses also have significant but negative path coefficients,
suggesting that they inversely impact AI adoption or usage. This finding aligns with the
technology acceptance model (TAM), which considers perceived ease of use and perceived
usefulness as key variables (Davis 1989).
As for R3, regarding the potential consequences of public attitudes for the adoption
and diffusion of AI, the results lend themselves to two primary interpretations. First, high
levels of trust can serve as a facilitator for broader AI adoption, a notion that has been
discussed in the context of other emergent technologies (Venkatesh and Davis 2000). Second,
the negative coefficients for risks and uses indicate that misconceptions or apprehensions
about AI can serve as barriers to its adoption, aligning with Rogers’ Diffusion of Innovations
theory, which posits perceived risks as impediments to the adoption of new technologies
(Rogers 2003).
The model fit indices, such as FIT, SRMR, and GFI, provide statistical validation for the
proposed model. They meet the general rules of thumb for an acceptable fit as delineated
by Cho et al. (2020). Therefore, the model appears to be robust and reliably explains the
variance in the variables, thereby bolstering the study’s findings.
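If the model were fitted with semopy as in the earlier sketch, the fit indices mentioned here could be retrieved in one call; the exact set of statistics returned (and whether SRMR is included) varies by package version, so this is an assumption rather than the authors' procedure.

```python
import semopy

# Fit statistics for a fitted semopy model (`model` from the earlier sketch)
stats = semopy.calc_stats(model)
print(stats.T)  # chi-square, GFI, CFI, RMSEA, and related indices
```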
Moreover, the correlations between major factors and the public’s perception of AI’s im-
pact on mankind are particularly illuminating. The results manifest that those who perceive
AI as a risk tend to be sceptical about its benefits for mankind, supporting the previous find-
ings about the impact of perceived risks on technological adoption (Venkatesh et al. 2012).
5. Discussion Summary
The salient finding that a mean score of 3.98 exists among respondents when evaluating
AI as the future of mankind warrants an in-depth discussion. While the result superficially
corresponds with Fountaine et al.’s (2019) observations about the organisational acceptance
of AI, the implication transcends mere acceptance and signifies a broader cultural embrace.
This delineation is crucial, particularly when situated against Brynjolfsson and McAfee’s
(2014) cautionary narrative about the ‘AI expectation-reality gap.’ By critically analysing
this juxtaposition, this research not only confirms but also challenges established paradigms
about societal optimism surrounding AI. Here, the study adds a layer of complexity by
demonstrating that this societal optimism persists despite a lack of statistically significant
predictors in the data.
Equally compelling is the discovery of geographical variations in attitudes towards
AI, which raises nuanced questions about the role of cultural conditioning in technology
adoption. Hofstede’s (2001) seminal work on cultural dimensions is pertinent here. The
divergence across countries in our data adds empirical richness to Hofstede’s theory,
expanding it from traditional technologies to nascent fields like AI. This effectively opens
the door for interdisciplinary inquiries that can explore the interaction between technology
and culture through varying analytical frameworks, perhaps even challenging some of
Hofstede’s dimensions in the context of rapidly evolving technologies.
Furthermore, the ethical considerations highlighted in this study provide an empirical
dimension to the normative frameworks proposed by the European Commission (2019)
and OECD (2019). While the existing literature stresses the conceptual importance of
ethics in AI (European Commission 2019; OECD 2019), the empirical data from this study
enriches this dialogue by pinpointing specific ethical attributes that currently hold societal
importance. This could have implications for policymakers and industry stakeholders, as it
elevates abstract ethical principles into empirically supported priorities.
Lastly, the use of SEM in this study finds validation in the methodological literature,
particularly the works of Hair et al. (2016). Our results strengthen their assertion that SEM
can effectively untangle complex interrelations in social science research. However, while
Hair et al. (2016) primarily address SEM’s robustness as a methodology, our study takes
this a step further by questioning the adequacy of existing models in explaining societal
attitudes towards complex and continually evolving technologies like AI.
6. Conclusions
In the realm of artificial intelligence (AI), it is intriguing to note that people predomi-
nantly regard the benefits to processes—such as greater innovation and productivity—as
outweighing what AI can offer to human outcomes, including enhanced decision-making
capabilities and potential. This inclination aligns with Reckwitz’s (2020) observations that
society is gradually moving from a mindset focused on rationalisation and standardisation
towards one that values the exceptional and unique. In this transformed landscape, technol-
ogy, including AI, plays a central role. The increasing prominence of quantifiable metrics,
including rankings and scores, bolsters this shift. These metrics, enabled by the continually
expanding capabilities of data processing in our digital era, serve to measure and define
the exceptional (Reckwitz 2020). Johannessen (2020) reminds us that the adoption of these
new technologies requires humanity to develop new skill sets, especially those that allow
for ethical reflection on the complexities introduced by such technological advancements.
This is not just an academic exercise; it is a societal imperative.
As our data demonstrate, trust and practical applications of AI are significant factors
shaping public opinion and engagement with the technology. A unanimous sentiment
emanating from the survey is the perceived ineffectiveness of governmental institutions in
tackling global crises such as climate change and COVID-19, often attributed to the vested
interests of politicians. Consequently, AI is increasingly viewed as a reliable and unbiased
tool for decision making at a high level, particularly for addressing complex global issues.
This account, then, does more than simply sum up the findings; it situates them within
broader scholarly and societal discussions about the evolving role of AI, pointing towards
the multifaceted implications of our collective trust in, and adoption of, this technology.
The role of governmental institutions in regulating the dissemination of information
has come to the forefront, especially at a time when the integrity of information is pivotal.
The findings of this research underscore that we are at a critical juncture where digital
platforms can easily amplify misinformation, thereby affecting public understanding and
fuelling polarisation. This situation calls for enhanced governance to set precedents con-
cerning what information should be shared and in what context. The public discourse
is presently characterised by polarisation, fragmenting society into opposing camps sus-
ceptible to manipulation and sudden shifts in opinion. The implications of this research
extend to scenarios where the unchecked development of superintelligence poses a po-
tential threat to humanity. In such a context, governmental action, often influenced by
the will of the voting public, becomes imperative to regulate and govern the trajectory
of artificial intelligence to ensure it aligns with the best interests of human society.
Appendix A
Questionnaire of the study.
(1) What is your age?
18–24
25–34
35–44
45–54
55–64
Above 64
(2) What is your gender?
Female
Male
Other (specify)
(38) I am comfortable using AI-powered products and services. (1 = I do not agree at all,
6 = I fully agree)
(39) I have encountered issues with AI-powered products and services in the past. (1 = I
do not agree at all, 6 = I fully agree)
(40) My cultural background influences my attitudes towards AI. (1 = I do not agree at all,
6 = I fully agree)
(41) Different cultures may have different perceptions of AI. (1 = I do not agree at all, 6 = I
fully agree)
(42) AI development should take cultural differences into account. (1 = I do not agree at
all, 6 = I fully agree)
(43) My cultural beliefs impact my level of trust in AI. (1 = I do not agree at all, 6 = I
fully agree)
(44) Cultural diversity can bring unique perspectives to the development and use of AI.
(1 = I do not agree at all, 6 = I fully agree)
(45) I believe AI is the future for humankind. (1 = I do not agree at all, 6 = I fully agree)
(46) I believe AI is the end of humankind. (1 = I do not agree at all, 6 = I fully agree)
(47) How optimistic or pessimistic are you about the future of AI and its impact on society?
References
Acemoglu, Daron, and Pascual Restrepo. 2017. Robots and Jobs: Evidence from US Labor Markets. VoxEU. Available online:
https://voxeu.org/article/robots-and-jobs-evidence-us (accessed on 28 April 2023).
Ajzen, Icek. 2005. Attitudes, Personality, and Behavior, 2nd ed. New York: Open University Press.
Alshehri, Mohammed, Steve Drew, and Rayed Alghamdi. 2012. Analysis of citizens’ acceptance for e-government services: Applying
the UTAUT model. Paper presented at the IADIS International Conferences Theory and Practice in Modern Computing and
Internet Applications and Research, Lisbon, Portugal, July 17–23.
Arrow, Kenneth J., Kamran Bilir, and Alan Sorensen. 2017. The impact of information technology on the diffusion of new pharmaceuti-
cals. American Economic Journal: Applied Economics 12: 1–39.
Asch, Solomon E. 1955. Opinions and social pressure. Scientific American 193: 31–35. [CrossRef]
Borges, Alex F. S., Fernando J. B. Laurindo, M. S. Mauro, R. F. Gonçalves, and C. A. Mattos. 2020. The strategic use of artificial
intelligence in the digital era: Systematic literature review and future research directions. International Journal of Information
Management 57: 102225.
Bossman, Jeremy. 2016. Top 9 Ethical Issues in Artificial Intelligence. World Economic Forum. Available online: https://www.weforum.
org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/ (accessed on 15 March 2023).
Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
Brandtzaeg, Petter B., and Asbjørn Følstad. 2017. Why people use chatbots. Paper presented at the Internet Science: 4th International
Conference, INSCI 2017, Thessaloniki, Greece, November 22–24.
Brooks, Amanda. 2019. The Benefits of AI: 6 Societal Advantages of Automation. Rasmussen University. Available online: https:
//www.rasmussen.edu/degrees/technology/blog/benefits-of-ai/ (accessed on 15 March 2023).
Brossard, Dominique. 2013. New Media Landscapes and the Science Information Consumer. Proceedings of the National Academy of
Sciences of the United States of America 110: 14096–101. [CrossRef] [PubMed]
Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies.
New York: W. W. Norton & Company.
Cho, Eunho, Heungsun Hwang, Marko Sarstedt, and Christian M. Ringle. 2020. Partial least squares structural equation modeling:
Current state, challenges, and future research directions. Journal of Marketing Analytics 8: 1–22.
Cialdini, Robert B. 1984. Influence: The Psychology of Persuasion. New York: Harper Collins.
Circiumaru, Adrian. 2022. Futureproofing EU Law the Case of Algorithmic Discrimination. Oxford: Oxford University.
Cohen, Jacob. 1988. Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Mahwah: Lawrence Erlbaum Associates.
Collins, Harry, and Robert Evans. 2007. Rethinking Expertise. Chicago: University of Chicago Press.
Creswell, John W., and J. David Creswell. 2017. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 5th ed.
Thousand Oaks: Sage Publications.
Creswell, John W., and Timothy C. Guetterman. 2019. Educational Research: Planning, Conducting, Evaluating. Boston: Pearson.
Darko, Amos, Albert P. C. Chan, Mubarak A. Adabre, David J. Edwards, M. Reza Hosseini, and Emmanuel E. Ameyaw. 2020. Artificial
intelligence in the AEC industry: Scientometric analysis and visualisation of research activities. Automation in Construction
112: 103081.
Davis, Fred D. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly
13: 319–40. [CrossRef]
Davis, Fred D., Richard P. Bagozzi, and Paul R. Warshaw. 1989. User acceptance of computer technology: A comparison of two
theoretical models. Management Science 35: 982–1003. [CrossRef]
Denzin, Norman K., and Yvonna S. Lincoln. 2005. The Sage Handbook of Qualitative Research, 3rd ed. Thousand Oaks: Sage Publications.
Eurobarometer. 2017. Attitudes towards the Impact of Digitisation and Automation on Daily Life. Available online: https://europa.
eu/eurobarometer/surveys/detail/2160 (accessed on 22 April 2023).
European Commission. 2019. Ethical Guidelines for Trustworthy AI. Brussels: European Commission.
European Commission & Directorate-General for Communications Networks, Content & Technology. 2017. Attitudes towards the
Impact of Digitisation and Automation on Daily Life: Report. European Commission. Available online: https://digital-strategy.
ec.europa.eu/en/news/attitudes-towards-impact-digitisation-and-automation-daily-life (accessed on 22 April 2023).
Featherman, Mauricio S., and Paul A. Pavlou. 2003. Predicting E-Services Adoption: A Perceived Risk Facets Perspective. International
Journal of Human-Computer Studies 59: 451–74. [CrossRef]
Flick, Uwe. 2018. An Introduction to Qualitative Research, 6th ed. Thousand Oaks: Sage Publications.
Fountaine, Tim, Brian McCarthy, and Tamim Saleh. 2019. Building the AI-Powered Organisation. Harvard Business Review 97: 62–73.
Frey, Carl B., and Michael A. Osborne. 2017. The future of employment: How susceptible are jobs to computerisation? Technological
Forecasting and Social Change 114: 254–80. [CrossRef]
Gansser, Oliver A., and Christoph S. Reich. 2021. A new acceptance model for artificial intelligence with extensions to UTAUT2: An
empirical study in three segments of application. Technology in Society 65: 101535. [CrossRef]
Gerlich, Michael. 2023. The Power of Virtual Influencers: Impact on Consumer Behaviour and Attitudes in the Age of AI. Administrative
Sciences 13: 178. [CrossRef]
Gillespie, Nicole, Steve Lockey, and Caitlin Curtis. 2021. Trust in Artificial Intelligence: A Five Country Study. Brisbane: The University of Queensland and KPMG.
Greenbaum, T. L. 1998. The Handbook for Focus Group Research. Thousand Oaks: Sage Publications.
Guiso, Luigi, Paola Sapienza, and Luigi Zingales. 2006. Does Culture Affect Economic Outcomes? Journal of Economic Perspectives 20: 23–48. [CrossRef]
Gursoy, Dogan, O. H. Chi, Lu Lu, and Robin Nunkoo. 2019. Consumers acceptance of artificially intelligent (AI) device use in service
delivery. International Journal of Information Management 49: 157–69. [CrossRef]
Hair, Joseph F., G. Tomas M. Hult, Christian M. Ringle, and Marko Sarstedt. 2016. A Primer on Partial Least Squares Structural Equation
Modeling (PLS-SEM). London: Sage Publications.
Hale, Julie, Brian Householder, and Kathryn Greene. 2003. The theory of reasoned action. The Persuasion Handbook: Developments in
Theory and Practice 14: 259–86.
Hartwig, Benjamin. 2021. Benefits of Artificial Intelligence. Hackr.io. Available online: https://hackr.io/blog/benefits-ofartificial-
intelligence (accessed on 17 April 2023).
Hofstede, Geert. 2001. Culture’s Consequences: Comparing Values, Behaviours, Institutions, and Organisations across Nations. London:
Sage Publications.
Huang, Ming-Hui, and Roland T. Rust. 2018. Artificial intelligence in service. Journal of Service Research 21: 155–72. [CrossRef]
Jaradat, Mohammed Riad M., and Mohammad S. Al Rababaa. 2013. Assessing Key Factor that Influence on the Acceptance of Mobile
Commerce Based on Modified UTAUT. International Journal of Business and Management 8: 102. [CrossRef]
Johannessen, Jon-Arild. 2020. Artificial Intelligence, Automation and the Future of Competence at Work, 1st ed. Oxfordshire: Routledge.
[CrossRef]
Kaplan, Andreas, and Michael Haenlein. 2019. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations,
and implications of artificial intelligence. Business Horizons 62: 15–25. [CrossRef]
Kaya, Feridun, Fatih Aydin, Astrid Schepman, Paul Rodway, Okan Yetişensoy, and Meva Demir Kaya. 2022. The Roles of Personality
Traits, AI Anxiety, and Demographic Factors in Attitudes toward Artificial Intelligence. International Journal of Human–Computer In-
teraction. [CrossRef]
Krueger, Richard A., and Mary Anne Casey. 2015. Focus Groups: A Practical Guide for Applied Research, 5th ed. Thousand Oaks: Sage
Publications.
Kvale, Steinar, and Svend Brinkmann. 2009. Interviews: Learning the Craft of Qualitative Research Interviewing, 2nd ed. Thousand Oaks:
Sage Publications.
Lai, Po-Ching. 2017. The Literature Review of Technology Adoption Models and Theories for the Novelty Technology. Journal of
Information Systems and Technology Management 14: 21–38. [CrossRef]
Lewicki, Roy J., Daniel J. McAllister, and Robert J. Bies. 1998. Trust and distrust: New relationships and realities. Academy of Management
Review 23: 438–58. [CrossRef]
Lichtenthaler, Ulrich. 2020. Extremes of acceptance: Employee attitudes toward artificial intelligence. The Journal of Business Strategy
41: 39–45. [CrossRef]
Longhurst, Robyn. 2016. Semi-Structured Interviews and Focus Groups. In Key Methods in Geography, 3rd ed. Edited by Nick Clifford,
Meghan Cope, Thomas Gillespie and Sarah French. London: Sage Publications, pp. 143–56.
Luan, Hao, Peter Geczy, Hui Lai, Janette Gobert, Stephen J. H. Yang, Hiroaki Ogata, Jeremy Baltes, Rafael Guerra, Peng Li, and
Chin-Chung Tsai. 2020. Challenges and future directions of big data and artificial intelligence in education. Frontiers in Psychology
11: 580820. [CrossRef] [PubMed]
Man Hong, Lai, Noor Chenawi, Wafi Zulkiffli, and Nur Hamsani. 2018. The Chronology of Perceived Risk. Paper presented at the 6th
International Seminar on Entrepreneurship and Business (ISEB 2018), Kota Bharu, Kelantan, November 24.
Maxwell, Joseph A. 2013. Qualitative Research Design: An Interactive Approach, 3rd ed. Thousand Oaks: Sage Publications.
Mayer, Roger C., James H. Davis, and F. David Schoorman. 1995. An Integrative Model of Organizational Trust. The Academy of
Management Review 20: 709–34. [CrossRef]
McCombs, Maxwell E., and Donald L. Shaw. 1972. The Agenda-Setting Function of Mass Media. Public Opinion Quarterly 36: 176–87.
[CrossRef]
McMillan, James H., and Sally Schumacher. 2014. Research in Education: Evidence-Based Inquiry. Boston: Pearson.
Meyers, Lawrence S., Glenn Gamst, and Anthony J. Guarino. 2013. Applied Multivariate Research: Design and Interpretation. Los Angeles:
Sage Publications, Inc.
Momani, Alaa M., and Mohammad M. Jamous. 2017. The Evolution of Technology Acceptance Theories. International Journal of
Contemporary Computer Research 1: 51–58.
Morgan, David. L. 1996. Focus Groups. Annual Review of Sociology 22: 129–52. [CrossRef]
Neudert, Lisa-Maria, Ansgar Knuutila, and Philip N. Howard. 2020. Global Attitudes Towards AI, Machine Learning & Automated Decision
Making. Working paper 2020.10. Oxford: University of Oxford.
OECD. 2019. Artificial Intelligence in Society. Organisation for Economic Co-Operation and Development Publishing. Available
online: https://www.oecd-ilibrary.org/sites/eedfee77-en/index.html?itemId=/content/publication/eedfee77-en (accessed on
12 April 2023).
Park, Jonghyuk, and Sang Eun Woo. 2022. Who likes artificial intelligence? Personality predictors of attitudes toward artificial
intelligence. The Journal of Psychology 156: 68–94. [CrossRef]
Pillai, Rajnandini, and Brijesh Sivathanu. 2020. Adoption of artificial intelligence (AI) for talent acquisition in IT/ITeS organizations.
Benchmarking: An International Journal 27: 2599–629. [CrossRef]
Reckwitz, A. 2020. The Society of Singularities. In Futures of the Study of Culture: Interdisciplinary Perspectives, Global Challenges. Edited
by D. Bachmann-Medick, J. Kugele and A. Nünning. Berlin and Boston: De Gruyter, pp. 141–54. [CrossRef]
Reinhart, Riley J. 2018. Most Americans Already Using Artificial Intelligence Products. Gallup. Available online: https://news.gallup.
com/poll/228497/americans-already-using-artificial-intelligence-products.aspx (accessed on 12 April 2023).
Restuccia, Diego, and Richard Rogerson. 2017. The Causes and Costs of Misallocation. Journal of Economic Perspectives 31: 151–74.
[CrossRef]
Rogers, Everett M. 2003. Diffusion of Innovations, 5th ed. Hong Kong: Free Press.
Schepman, Astrid, and Paul Rodway. 2022. The General Attitudes towards Artificial Intelligence Scale (GAAIS): Confirmatory
validation and associations with personality, corporate distrust, and general trust. International Journal of Human–Computer
Interaction 39: 2724–41. [CrossRef]
Sivathanu, Brijesh, and Rajnandini Pillai. 2019. Leveraging Technology for Talent Management: Foresight for Organizational
Performance. International Journal of Sociotechnology and Knowledge Development 11: 16–30. [CrossRef]
Tai, Michael C. T. 2020. The impact of artificial intelligence on human society and bioethics. Tzu Chi Medical Journal 32: 339–43.
[CrossRef] [PubMed]
Tubadji, Annie, and Peter Nijkamp. 2016. Six degrees of cultural diversity and R&D output efficiency: Cultural percolation of new
ideas. Letters in Spatial and Resource Sciences 9: 247–64.
Vance, Anthony, Christophe Elie-Dit-Cosaque, and Detmar W. Straub. 2008. Examining trust in information technology artifacts: The
effects of system quality and culture. Journal of Management Information Systems 24: 73–100. [CrossRef]
Venkatesh, Viswanath, and Fred D. Davis. 2000. A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal
Field Studies. Management Science 46: 186–204. [CrossRef]
Venkatesh, Viswanath, Michael G. Morris, Gordon B. Davis, and Fred D. Davis. 2003. User Acceptance of Information Technology:
Toward a Unified View. Management Information Systems Quarterly 2003: 425–78. [CrossRef]
Venkatesh, Viswanath, James Y. L. Thong, and Xin Xu. 2012. Consumer Acceptance and Use of Information Technology: Extending the
Unified Theory of Acceptance and Use of Technology. MIS Quarterly 36: 157–78. [CrossRef]
Verdier, Thierry, and Yves Zenou. 2017. The role of social networks in cultural assimilation. Journal of Urban Economics 97: 15–39.
[CrossRef]
Williams, Michael D., Nripendra P. Rana, and Yogesh K. Dwivedi. 2015. The unified theory of acceptance and use of technology
(UTAUT): A literature review. Journal of Enterprise Information Management 28: 443–88. [CrossRef]
Zhang, Baobao, and Allan Dafoe. 2019. Artificial Intelligence: American Attitudes and Trends. Oxford: Center for the Governance of AI, Future of Humanity Institute, University of Oxford. [CrossRef]
Zollmann, Julia. 2017. Corporate media coverage of the role of corporations in global health. Critical Sociology 43: 709–28.