Computers in Industry
journal homepage: www.sciencedirect.com/journal/computers-in-industry
A R T I C L E I N F O

Keywords:
Generative AI models
ChatGPT
Citizen data science
Retail firms
Industrial growth
Industrial and innovation

A B S T R A C T

Generative Artificial Intelligence (AI) models serve as powerful tools for organizations aiming to integrate advanced data analysis and automation into their applications and services. Citizen data scientists—individuals without formal training but skilled in data analysis—combine domain expertise with analytical skills, making them invaluable assets in the retail sector. Generative AI models can further enhance their performance, offering a cost-effective alternative to hiring professional data scientists. However, it is unclear how AI models can effectively contribute to this development and what challenges may arise. This study explores the impact of generative AI models on citizen data scientists in retail firms. We investigate the strengths, weaknesses, opportunities, and threats of these models. Survey data from 268 retail companies is used to develop and validate a new model. Findings highlight that misinformation, lack of explainability, biased content generation, and data security and privacy concerns in generative AI models are major factors affecting citizen data scientists' performance. Practical implications suggest that generative AI can empower retail firms by enabling advanced data science techniques and real-time decision-making. However, firms must address drawbacks and threats in generative AI models through robust policies and collaboration between domain experts and AI developers.
1. Introduction

Generative AI models, such as ChatGPT, are advanced natural language processing (NLP) models that use artificial intelligence techniques (Lund et al., 2023; Huang et al., 2023; Akter et al., 2023), particularly deep learning (LeCun et al., 2015), to understand and generate human-like text-based responses in a conversational manner. These models have significant applications in various fields due to their ability to process and generate human language effectively (Wong et al., 2023; Wamba et al., 2023a). Focusing on the market size of these models and referring to (Statista, 2023a), the marketplace for Generative AI is expected to increase significantly in the following years. It is projected to attain a market size of US$44.89 billion in 2023, with a 24.40 % yearly growth from 2023 to 2030. This rapid expansion is predicted to result in a significant market worth US$207.00 billion by 2030. On an international level, the US will emerge as the global market leader in terms of size, with an anticipated US$16.14 billion in 2023, highlighting its vital position in the evolution of the Generative AI business.

Generative AI models are revolutionizing decision-making across various industries. In marketing and customer service, they perform sentiment analysis to gauge public opinion (Adam et al., 2021). In chatbots, they streamline real-time interactions (Adamopoulou and Moussiades, 2020). Finance sees a transformation as these models provide data-driven insights for investment strategies, risk assessment, and market trends (Okuda and Shoda, 2018; Khan and Rabbani, 2021). Healthcare gets a boost with AI diagnosing diseases and recommending
treatments based on patient data and medical literature (Xu et al., 2021; Palanica et al., 2019; Gala and Makaryus, 2023). In education, AI enhances personalized learning materials for better student experiences (Fuchs, 2023; Kasneci et al., 2023). Generative AI models have been able to analyze feedback from online review platforms and social media, which helps tourism and hospitality firms improve the quality of their business (Wong et al., 2023; Carvalho and Ivanov, 2023). Personalized content-based recommendation through AI-based chatbots has helped tourists choose suitable activities, food, and accommodation. Moreover, many companies may introduce chatbots to quickly answer basic inquiries of their customers, which can decrease the pressure on human service providers and increase the overall satisfaction of customers (Orden-Mejia and Huertas, 2022). Across these fields, AI has been shown to increase efficiency, profitability, and customer satisfaction.

Citizen data scientists (Merkelbach et al., 2022; Lawrence, 2019) are individuals who, without formal training, have developed the skills needed to analyze data and draw valuable insights using various tools. Their role is crucial for many industries because they enhance data-driven decision-making. Citizen data scientists help professionals extract actionable insights from underutilized data across various fields (Dorsey, 2019). As they possess a unique blend of industry-specific knowledge and data analysis capabilities, the presence of citizen data scientists in retail firms holds particular significance. This context is widely investigated in the previous research (Merkelbach et al., 2022; Mullarkey et al., 2019; Alpar and Schulz, 2022). In this context, the emergence of generative AI models like ChatGPT proves to be incredibly valuable in supporting the development of citizen data scientists. Particularly, through the aid of these models, this development will be cost-efficient and time-effective. In fact, developing the capabilities of data scientists involves investments in retail organizations. Citizen data scientists may present a promising alternative to the traditional methods of hiring or training professional data scientists. However, it is unclear how AI models can effectively contribute to this development and what challenges may arise. Thus, it is crucial to fully examine the potential strengths, weaknesses, opportunities, and threats associated with this adoption in the development of citizen data scientists in retail firms. In addition, generative AI models have primarily been assessed using algorithmic-based evaluations (Lencastre et al., 2023), with limited attention given to evaluating the influential factors and a diminished focus on human-centered assessments of these factors.

Therefore, in this study, we examine the role of generative AI models in the development of citizen data scientists within the retail sector using the SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis (Bull et al., 2016; Noguerol et al., 2019). We also conduct an empirical investigation and develop a model based on the identified factors in SWOT analysis to evaluate how they impact the adoption of generative AI models from the citizen data scientists' perspectives. Overall, this research aims to address the following research questions:

i. How can generative AI models empower and enhance the capabilities of citizen data scientists in the retail industry?
ii. What are the specific strengths and weaknesses of adopting generative AI models by citizen data scientists within the retail sector?
iii. What are the opportunities and threats associated with adopting generative AI models in nurturing citizen data scientists within the retail sector?

The rest of this study is as follows: related studies are reviewed in Section 2. The proposed framework is presented in Section 3. Data collection and analysis are examined in Sections 4 and 5, respectively. The conclusion and research limitations are presented in Section 6.

2. Literature reviews

Data is indispensable in the service industry (Stylos et al., 2021; Yallop and Seraphin, 2020; Line et al., 2020) as it underpins personalized guest experiences, optimizes pricing and availability through demand forecasting, enhances operational efficiency, fosters competitive advantage, and enables targeted marketing campaigns (Line et al., 2020). Additionally, it provides insights into customer behavior (Yadegaridehkordi et al., 2020). By harnessing the power of data, businesses in this industry can meet guest expectations and also drive efficiency, sustainability, and competitiveness, ensuring their long-term success in a dynamic and customer-centric landscape (Yadegaridehkordi et al., 2020; Ahani et al., 2019). By using data-driven analytics, organizations can optimize processes, identify potential areas for growth, improve customer experiences, and stay ahead of competitors (Vassakis et al., 2018). Organizations that effectively use data have a competitive advantage in today's business landscape, allowing them to navigate rapidly changing conditions, succeed, and emerge as industry leaders (Niu et al., 2021). According to the statistics (Statista, 2022), in 2021, the global big data analytics market boasted a valuation exceeding 240 billion U.S. dollars. This dynamic sector is poised for substantial expansion in the foreseeable future, with projections indicating a market value exceeding 650 billion dollars by 2029.

The profusion of data poses both challenges and opportunities. Data scientists, with their specialized training and expertise (Davenport and Patil, 2012), play a critical role in this scenario (Song and Zhu, 2016). They are responsible for collecting, cleaning, and analyzing data, extracting meaningful insights that drive strategic decision-making. They employ advanced statistical and machine-learning techniques to identify patterns, trends, and correlations within the data (Davenport and Patil, 2012; Kim et al., 2016). Moreover, they develop predictive models that aid demand forecasting, resource allocation, risk management, etc. Overall, data scientists serve as the analytical backbone of the industry, helping organizations navigate the complexities of a data-rich environment. According to statistics (Statista, 2023b), companies are intensifying their recruitment endeavors in various sectors to expand their data science teams. Between 2020 and 2021, there was a noteworthy surge in the surveyed organizations with 50 or more data scientists on their payroll, with proportions rising from 30 percent to nearly 60 percent. The size of data science teams within these organizations also witnessed significant growth, increasing from an average of 28 data scientists to 50 (Statista, 2023b).

Citizen data scientists have the potential to enhance the work of professional data scientists in retail firms. Despite lacking formal data science training, they often bring valuable domain-specific knowledge and a practical understanding of industry intricacies (Gröger, 2018; Zacarias et al., 2021). One of their strengths is the ability to analyze customer data and provide insights that can lead to more informed and timely decisions. Citizen data scientists play an important role in the retail industry by customizing marketing strategies based on customer behavior and preferences. Nowadays, a significant amount of data in the retail industry is derived primarily from social media platforms. Therefore, the examination of these diverse data sources, which consist of both structured and unstructured data, can provide significant benefits to retail companies. In this context, citizen data scientists can help retail firms improve products and services by analyzing customer feedback and reviews. This is one approach for identifying specific areas for product and service improvements. Citizen data scientists can help retailers maintain a competitive advantage by identifying market opportunities and threats. This can be accomplished by analyzing market trends and competitive data.

Citizen data scientists engage in collaborative efforts with professional data scientists in order to effectively bridge the existing divide between unprocessed data and practical insights that can be readily utilized (Villanueva Zacarias et al., 2023). They are instrumental in framing relevant questions and interpreting analytical results. Their role
is particularly crucial in ensuring that the insights derived from data align with the industry's unique challenges and objectives. Citizen data scientists also promote data literacy in organizations. With adequate data science knowledge, they will help non-specialists understand and use data in retail firms. Using these skills, citizen data scientists can help democratize data-driven retail decision-making, allowing stakeholders at all levels to shape the industry's future.

Generative AI models can enrich the retail sector through several processes, including but not limited to (1) store optimization and visual merchandising, which include customer movement analysis and store layout optimization; (2) data analysis and insights generation, which entail several functions like fraud detection, supply chain optimization (Wamba et al., 2023b), and customer feedback analysis; (3) customer support and engagement (Baabdullah, 2024), which entails AI-driven chatbots, effective customer support, and enhanced response time; (4) pricing strategies and dynamic pricing, which include the deployment of competitive pricing strategies, the deployment of dynamic pricing strategies, and the implementation of profit optimization; (5) personalized shopping experience, which can be achieved by tailored recommendation (Bellini et al., 2023) and content generation (Kumar et al., 2023); and (6) inventory management, which can be enhanced by optimized stock levels, improved demand forecasting, and minimized overstock and understock situations.

In retail firms, the inclusion of citizen data scientists holds paramount importance due to their distinct amalgamation of industry expertise and proficiency in data analysis. Within this framework, the emergence of generative AI models such as ChatGPT stands out as a pivotal approach for the development of citizen data scientists' skills. Although previous research has widely investigated the role of citizen data scientists from different angles in organizations, their contributions to the retail sector are fairly unexplored. In addition, it is not clear how AI systems can assist citizen data scientists in improving the performance of retail organizations through their capabilities. In general, evaluating the performance of citizen data scientists in retail settings when using generative AI models can provide valuable insights into their effectiveness and areas for improvement. This, in turn, will help optimize data analytics practices and strategies within the retail industry.

3. Generative AI models and performance of citizen data scientists

Data and machine learning advancements in the digital age offer organizations a unique opportunity to use data analytics (Kava et al., 2021). Data analytics and machine learning are becoming increasingly important to retailing (Wang et al., 2021). A comprehensive overview of the role of machine learning analytics and metrics in retailing is presented in the survey by (Wang et al., 2021). The deployment of machine learning requires highly skilled individuals in this field. However, because there are expenses involved in developing the capabilities of data scientists, a shortage of such experts in many retail organizations is apparent. Such costs could be associated with training, technology, time, and investments in infrastructure. These aspects are essential for equipping individuals with the necessary skills for data analysis and decision-making. The development of citizen data scientists is essential to fill this gap in organizations. Citizen data scientists are individuals without formal data science training who possess the skills and tools to analyze data and derive insights. They can reduce the need for highly specialized data scientists and speed up decision-making by providing timely insights. Although generative AI models, such as GPT-3 and ChatGPT, can offer a user-friendly interface that simplifies complex data analysis tasks, several aspects should be considered for their use in developing citizen data scientists in retail firms. In this section, we investigate the role of generative AI models in the performance of citizen data scientists and provide a SWOT (strengths, weaknesses, opportunities, and threats) analysis to shape the final model and hypotheses. The factors in each category are presented and discussed. The SWOT analysis is the basis of the research model of this work.

3.1. Strengths

3.1.1. Ease of use
Although previous studies have examined ChatGPT from different angles (Han et al., 2024; Pursnani et al., 2023), ease of use is critically important for the adoption and use of this system (Albayati, 2024). This variable is used in determining whether an individual will voluntarily use an IS (Karahanna and Straub, 1999; Venkatesh and Davis, 1996). Ease of use has been explored as an important driver of the adoption of AI-based models in the literature. For instance, in the study by Albayati (Albayati, 2024), the authors indicate that ease of use is reflected in the users' capabilities to easily utilize ChatGPT to accomplish the required tasks and become skillful in using it. It also refers to the user's perception of how simple and easy-going the interaction with ChatGPT is (Almulla, 2024). Building on prior work in other fields (Venkatesh and Davis, 1996; Gefen and Keil, 1998), we propose that:

H1. The ease of use of generative AI models positively impacts the performance of citizen data scientists.

3.1.2. Cost-efficiency
Cost-efficiency refers to the ability of a system, process, or organization to achieve its objectives and desired outcomes while minimizing the consumption of resources, particularly financial resources (Kangaspunta et al., 2012). Generative AI models, like any other tools, have related financial costs. These might be framed as one-time purchases, usage-based price plans, or monthly payments. Users evaluate these costs to determine if the tool is cost-effective, considering whether the price is reasonable and within their budget. The cost-effectiveness of ChatGPT arises from its capacity to offer accessible solutions for data analysis, eliminating the requirement for specialized knowledge or intricate infrastructure. Consequently, it becomes an affordable and accessible tool for individuals in the industry who aim to utilize data to make well-informed decisions. This cost-effective strategy reduces data science entry barriers and maximizes return on investment for organizations using data for operations and customer experiences. Building on the above discussion, we propose that:

H2. The cost-efficiency of generative AI models positively impacts the performance of citizen data scientists.

3.1.3. Continuous learning
As stated in (Pianykh et al., 2020), in 1955, John McCarthy had a remarkable foresight when he said, "Probably a truly intelligent machine will carry out activities which may best be described as self-improvement" (McCarthy et al., 2006). Continuous learning in an iterative process helps in refining the system's performance over time (Pianykh et al., 2020). On the basis of this definition, continuous learning in the context of generative AI models refers to the ongoing process of updating and improving these models to enhance their capabilities. This process can be automated by inputting new data into the model and enhancing its algorithms. Continuous learning is an essential factor in the performance of citizen data scientists using generative AI models like ChatGPT in retail firms. Drawing upon the above discussion, we propose that:

H3. Continuous learning of generative AI models positively impacts the performance of citizen data scientists.

3.1.4. Innovative insights
Innovativeness has been linked to the generative AI model's usage, while also referring to people who are receptive to new technology and eager to use it (Bouteraa et al., 2024). The term innovative insights is used to describe the unique and useful understandings or perspectives that can be gained through data analysis, research, or exploration, and
which in turn can inspire novel and forward-thinking solutions, strategies, or ideas. By leveraging innovative insights, generative AI models become potent tools for data-driven problem-solving, risk management, and content optimization, ultimately driving value across a spectrum of applications and industries. In fact, they provide individuals with a competitive advantage by offering the capacity to uncover hidden patterns, generate meaningful reports, and make data-driven decisions. This not only accelerates individuals' learning curve but also enhances their problem-solving capabilities, ultimately fostering a new generation of data-savvy professionals who can contribute effectively to industries like retail firms. Accordingly, we propose that:

H4. Innovative insights capabilities of generative AI models positively impact the performance of citizen data scientists.

3.2. Weaknesses

3.2.1. Data security and privacy concerns
Data security and privacy concerns in information systems refer to the challenges associated with safeguarding sensitive data and protecting individuals' privacy within digital environments (Turn and Ware, 1976; Saura et al., 2022). Security and privacy threats are considered weaknesses of generative AI models because they can inadvertently generate responses that may expose sensitive or private information, and the lack of control over the data shared during interactions raises concerns about data privacy and security breaches. Although these models do not directly reveal personal information in response to inquiries, inference tasks can potentially indicate that the model (e.g., ChatGPT) stores and records such data, which threatens the privacy and security of data (Lai et al., 2023). Accordingly, we propose that:

H5. Data security and privacy concerns of generative AI models negatively impact the performance of citizen data scientists.

3.2.2. Explainability
Explainability in AI refers to the ability to understand and interpret how an artificial intelligence system (Meske et al., 2022; Tchuente et al., 2024; Suhail et al., 2023), such as a machine learning model or underlying algorithm, makes decisions or predictions, making the reasoning behind its actions transparent and understandable to humans (Arrieta et al., 2020). Most generative AI models do not offer full explainability, primarily due to their inherent complexity. These models, often based on deep learning and neural networks, comprise millions or even billions of parameters, making it challenging to trace every decision back to its source. Additionally, achieving complete transparency can sometimes compromise performance. Balancing the need for explainability while maintaining high levels of accuracy and efficiency is a complex trade-off. Indeed, the limited explainability of many generative AI models is considered a significant shortcoming in the field.

Drawing upon the research conducted by (Alalwan et al., 2017), users' perceptions of integrity, ability, benevolence, and reliability of the technology reflect trust in technology. Explainability has been one of the indicators of users' trust in technology (Abumalloh et al., 2020). It is found that there is a link between explanation, accountability, and ethical usage (Lima et al., 2022). Particularly, this point has been raised in the literature in different contexts. For instance, according to Del Ser, Barredo-Arrieta, Díaz-Rodríguez, Herrera, Saranti and Holzinger (Del Ser et al., 2024), counterfactual explanations could be utilized to establish trust toward large language models. In another study (Lai et al., 2024), trust in ChatGPT appeared as an important driver of its adoption. The authors further discussed that, since data transparency is an important concern among students, explainable AI models are essential in the development of trust among users. Drawing upon the above discussion, we propose that:

H6. The lack of explainability within generative AI models negatively impacts the performance of citizen data scientists.

3.2.3. Tendency to generate biased content
In general, the tendency to generate biased content refers to the inclination of AI systems to produce text or responses that exhibit biases based on the data they were trained on (Ntoutsi et al., 2020). Bias in generative AI models is a concern because it can spread harmful biases. This issue has been widely examined in prior studies in the context of AI systems (Fischer, 2023; Tortora, 2024; Srinivasan and Uchino, 2021). As per the research conducted by Currie, Hawk and Rohren (Currie et al., 2024), ChatGPT can reinforce the existing gender and ethnicity stereotypes in business and society. This can be explained by several sources of biases, including outdated data, lack of contextual understanding, lack of ethical judgment, and insufficient reliability, precision, and creativity. Biased content generated by AI models may limit the scope and quality of training and raise ethical concerns. As discussed by (Schwartz et al., 2022), addressing bias is crucial in building user trust and ensuring accurate information dissemination. In fact, dealing with this issue in AI systems will mitigate the negative consequences associated with biased content. Drawing on previous research, in the context of generative AI models, it is inferred that the presence of bias may be a major barrier to their effective use in developing citizen data scientists. Hence, we propose that:

H7. The performance of citizen data scientists is negatively impacted by biased content in generative AI models.

3.2.4. Dependence on external resources
Data plays a key role in data-centric AI (Whang et al., 2023; Nilashi et al., 2023). Generative AI models rely heavily on external resources such as data preprocessing algorithms (Hassani and Silva, 2023). There is no doubt that these models will experience reduced functionality when external resources are unavailable or inaccessible. In many contexts in retail, such as online customer reviews (Floyd et al., 2014), real-time access to the data will play an important role in decision-making and service improvement problems. As such, timely and comprehensive access to resources within generative AI models is essential for citizen data scientists' development. Accordingly, we propose that:

H8. Dependence on external resources within generative AI models negatively impacts the performance of citizen data scientists.

3.3. Opportunities

3.3.1. Integration with Machine Learning (ML) platforms
Integrating with ML platforms involves incorporating generative AI models into ML frameworks and environments, enabling seamless collaboration between natural language understanding capabilities and broader ML processes. ML can present robust performance that overcomes the limitations of classical statistical approaches, while also posing the capability of handling data that is diverse in terms of formats and features (An et al., 2024). Generative AI models can serve as valuable components with the integration of ML platforms, enriching their capabilities and facilitating more efficient and insightful handling of text data for various applications, including sentiment analysis, content classification, recommendation systems, and data-driven decision-making. User interactions with ML platforms will become more intuitive and natural, allowing plain-language conversations with AI models to configure ML experiments, obtain real-time insights, and receive tailored recommendations. Accordingly, we propose that:

H9. Integrating generative AI models with ML platforms positively impacts the performance of citizen data scientists.

3.3.2. Data visualization
Data visualization is a powerful way for data professionals to display data so that it can be interpreted easily (Olshannikova et al., 2015). Integrating visualizations into the core of generative AI models is crucial
as it enhances interpretability, facilitates effective communication, aids decision-making, detects and mitigates biases, engages users, and adapts to diverse domains. Visual tools make AI models more user-friendly, transparent, and versatile, aligning them with user needs and expectations across various applications and industries. Moreover, visualizations enhance their ability to communicate data insights effectively to non-technical stakeholders, bridging the gap between data experts and decision-makers. Visualizations integrated into generative AI models are instrumental in developing better citizen data scientists. Citizen data scientists can gain proficiency in data analysis and interpretation, contributing meaningfully to diverse industries and contexts. Accordingly, we propose that:

H10. The visualization of generative AI models positively impacts the performance of citizen data scientists.

3.4. Threats

3.4.1. Misinformation
Misinformation is incorrect or misleading information (King and Wang, 2021; Bonnevie et al., 2021). The replies of generative AI models to users' requests might not always be correct, and these models are capable of delivering misinformation, which users might trust as being correct (Derner and Batistič, 2023). This issue is critical in the context of retailing, which involves a risk of financial loss. Misinformation in AI can occur through inaccuracies in training data (Qadir, 2023), the generation of false content by AI models, and the manipulation of social media content. Generative AI models have the ability to generate text that closely resembles human writing on a large scale. This increases the concerns about the potential for misinformation to be generated and spread (Monteith et al., 2024). This will accordingly undermine trust and integrity in content produced by AI. The inadvertent generation or propagation of false information by generative AI models can have negative impacts on insights and decision-making processes. For example, as stated by (Monteith et al., 2024), the spread of online misinformation in all areas of medicine is particularly dangerous. The authors further highlight that human intelligence is still needed to evaluate the accuracy of generative AI output. In the educational context, (Whalen and Mouza, 2023) emphasized that there is a risk in assuming that ChatGPT will produce credible, accurate, and trustworthy results, which could hinder or even harm learning. In the retail context, accurate and trustworthy information can be employed by citizen data scientists to optimize strategies. In such a context, misinformation can lead to erroneous insights. Hence, it is crucial to address and reduce the spread of misinformation in order to ensure the reliability and effectiveness of generative AI models for training citizen data scientists. Based on the above discussion, we aim to examine the following hypothesis:

H11. Misinformation within generative AI models negatively impacts the performance of citizen data scientists.

3.4.2. Data source limitations
Data source quality and availability are paramount for AI models (Stöger et al., 2021). According to a study by (Stanula et al., 2018), the effectiveness of machine learning models and their accuracy are heavily influenced by the objectives and the database at hand. Prior research (Sáez et al., 2021) showed that ensuring high data quality is crucial for predictive models, just like having a sufficient sample size. As training of AI models is performed based on the data, the availability of high-quality data sources can significantly impact the accuracy and reliability of these models. A generative AI model, such as ChatGPT, is no exception. Data source limitations can threaten these models. Such limitations can constrain the model's ability to generalize and provide reliable information. Ensuring access to robust data sources is a critical step in enhancing the performance and ethical use of AI models. Data source limitations can indeed impact the development of citizen data scientists' capabilities through generative AI models, and they can be considered a potential threat in this context. When data sources are limited in diversity, quality, or relevance, the training and performance of AI models used to develop citizen data scientists' capabilities may be compromised. This can result in less effective training outcomes, reduced ability to handle diverse real-world data, and potentially biased or inaccurate insights. Accordingly, we propose that:

H12. Data source limitations within generative AI models negatively impact the performance of citizen data scientists.

According to the above discussion, Fig. 1 summarizes the SWOT analysis for the contribution of generative AI models to the performance of citizen data scientists within retail firms.

4. Survey development and data collection

4.1. Survey development
The authors referred to both the literature review and experts in the development of the survey, as the constructs we aim to explore were new and have not been explored in previous literature. To ensure that the proposed items truly reflect the intended factors, we distributed a version of the survey to seven experts in data science, information systems, information technology, and computer science. The experts were asked to comment on the items and respond to an initial version of the survey. Following their comments, we obtained the final version of the survey (Appendix A), which we used in the main data collection procedure.

The survey was sent to retail enterprises in Malaysia to achieve the research objectives, and formal emails were issued to the selected organizations. The data was collected in July 2023. A total of 268 valid responses were collected. The demographic statistics of the participants are analyzed in Table 1. The survey started with an explanation of the research's objectives and goals. Furthermore, a precise definition of the notion of citizen data scientists was provided in the survey. Respondents were additionally questioned about their job titles and the types of retail enterprises that best described the companies they work for.

4.2. Demographic data analysis
As shown in Table 1, the demographic data in this study reflects a diverse and representative sample of participants. Regarding gender distribution, the study includes slightly more female participants, 50.7 %, than male participants, 49.3 %. Among age groups, 36 – 40 is the most populous, comprising 26.9 % of the total, followed by 30 – 35 at 23.1 %. The "less than 30" age group represents 12.3 % of the participants, while the 41 – 45 and 46 – 50 groups account for 14.6 % and 16.4 %, respectively. The "above 50" age group is the smallest, making up 6.7 % of the sample. Furthermore, the most common educational attainment is a Bachelor's degree, held by 82.1 % of the participants. In addition, 5.2 % of the population possesses a Diploma, with another 7.5 % holding a Master's degree and an equivalent 5.2 % having achieved a Ph.D.

When examining the types of retail businesses, Electronics and Technology Retail emerges as the most prominent category, constituting 23.9 % of the total, followed by Grocery Retail at 17.9 %. Fashion and Apparel Retail (10.4 %) and Furniture and Home Decor Retail (7.1 %) also hold significant market shares. Other categories, including Pharmacies and Drug Retail (6.3 %) and Educational Software and Online Learning Platforms (5.2 %), display varying levels of representation. The data also includes Beauty and Cosmetics Retail (3.7 %), Health and Wellness Retail (3.4 %), Home Improvement Retail (3.4 %), Jewelry Retail (3.7 %), and Sporting Goods Retail (3.7 %). Furthermore, the "Others" category accounts for 11.2 %.

Regarding job titles, Data Analyst stands out, accounting for 23.5 % of the positions, followed by Business Analyst at 16.4 %. Other notable roles include Sales Analyst (8.2 %), HR Analyst (7.8 %), and Customer Data Analyst (7.1 %), which are prominent within this domain. The data
also unveils various specialized positions, such as Financial Analyst (6.7 %), Marketing Analyst (6.0 %), and Market Research Analyst (4.9 %). Some roles, like Quality Assurance Analyst (4.1 %), Risk Analyst (3.4 %), and Environmental Analyst (1.5 %), have varying levels of representation. The "Others" category (3.7 %) implies the presence of less common job titles in this domain. Additionally, positions like Operations Analyst (2.2 %), Product Data Analyst (2.2 %), and Supply Chain Analyst (2.2 %) contribute to the overall diversity of roles within this context.

Regarding job experience, most individuals (47.0 %) possess 3–4 years of experience, followed by those with 5 years or more (37.3 %). The next most significant group comprises individuals with 1–2 years of experience, making up 10.4 % of the population, while those with less than 1 year of experience represent 5.2 % of the total.

Fig. 1. SWOT Analysis in the Development of Citizen Data Scientists by Generative AI Models.

5.1. Assessment of the outer model

The survey items were tested for reliability and validity in three distinct domains: (1) Convergent Validity (CV), (2) Internal Consistency (IC), and (3) Discriminant Validity (DV). The initial evaluation included confirming CV by checking outer loading values greater than 0.4 and verifying that AVE measures were higher than 0.5, as advised by (Hair Jr et al., 2021). Cronbach's alpha (CA) and Composite Reliability (CR) values higher than 0.7 were used to confirm the IC measure (see Table 2). Finally, Cross-Loadings (Appendix B) and the Heterotrait-Monotrait Correlation Ratio (HTMT) were employed to confirm DV.

Overall, the findings of the analysis show that the measurement model utilized in the study is reliable in terms of IC and CV. The indicators under consideration are reliable, with indicators within each construct being tightly connected. Furthermore, the constructs encompass a significant correlation in the observed items, supporting their CV. These findings imply that the measurement model is well-built, and the indicators are reliable for assessing the constructs that are relevant to the research study. Referring to the HTMT values in the presented matrix (Table 3), all values are significantly less than 1, showing high discriminant validity between the corresponding constructs. These lower HTMT values imply that the constructs are empirically different and do not have significant variance overlap.

Since Sewall Wright (1921) proposed the inner model, researchers have utilized it to inspect and examine the relationships between factors. Accordingly, it has been used in various settings, with the social sciences receiving special attention (Hair et al., 2011). As a result, PLS-SEM has been used in multiple research areas and has been integrated into various applications (Cruz-Jesus et al., 2019; Rodriguez et al., 2009). In the inner model analysis, we focus on two main tests: the coefficients of determination (R2) and the coefficients of paths (PC).
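As a simplified illustration of how the inner-model quantities discussed above (path coefficients, bootstrap t-values, and R2) can be obtained, the sketch below regresses a composite score of the endogenous construct on composite scores of its predictors and bootstraps the standard errors. It is only a rough stand-in under stated assumptions (mean-score composites and hypothetical column names following the Appendix A item codes); the study itself uses PLS-SEM, which estimates weighted composites iteratively.

```python
# Hedged sketch, not the PLS-SEM estimation used in the study: ordinary least squares on
# mean-score composites with bootstrapped standard errors, to illustrate how standardized
# path coefficients (beta), t-values, and R2 of the kind reported are typically derived.
# The item-to-construct mapping and column names are assumptions for illustration only.
import numpy as np
import pandas as pd

def composites(items: pd.DataFrame, blocks: dict[str, list[str]]) -> pd.DataFrame:
    """Average each construct's items into one composite score per respondent."""
    return pd.DataFrame({name: items[cols].mean(axis=1) for name, cols in blocks.items()})

def standardized_paths(scores: pd.DataFrame, target: str = "CDSP"):
    """OLS on z-scored composites: returns path coefficients and R2 for the target construct."""
    z = (scores - scores.mean()) / scores.std(ddof=1)
    predictors = z.drop(columns=target)
    X = np.column_stack([np.ones(len(z)), predictors.values])
    y = z[target].values
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1.0 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return pd.Series(beta[1:], index=predictors.columns), r2

def bootstrap_t_values(scores: pd.DataFrame, target: str = "CDSP",
                       n_boot: int = 5000, seed: int = 1) -> pd.Series:
    """t-value = point estimate / bootstrap standard error, resampling respondents."""
    rng = np.random.default_rng(seed)
    point, _ = standardized_paths(scores, target)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(scores), len(scores))
        draws.append(standardized_paths(scores.iloc[idx], target)[0])
    se = pd.concat(draws, axis=1).std(axis=1, ddof=1)
    return point / se
```

With `blocks` mapping construct codes (e.g., "EOU", "CEFF", ..., "CDSP") to their item columns, `bootstrap_t_values(composites(responses, blocks))` yields values comparable in spirit to the reported β and t statistics, although the exact numbers would differ from a PLS-SEM run.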
Table 1
Demographic results of the participants (N=268).

Item  Description  Frequency  %
Gender  Female  136  50.7 %
Gender  Male  132  49.3 %
Age  Less than 30  33  12.3 %
Age  30 – 35  62  23.1 %
Age  36 – 40  72  26.9 %
Age  41 – 45  39  14.6 %
Age  46 – 50  44  16.4 %
Age  Above 50  18  6.7 %
Education  Bachelor  220  82.1 %
Education  Diploma  14  5.2 %
Education  Master  20  7.5 %
Education  Ph.D.  14  5.2 %
Type of Retail Business  Beauty and Cosmetics Retail  10  3.7 %
Type of Retail Business  Educational Software and Online Learning Platforms  14  5.2 %
Type of Retail Business  Electronics and Technology Retail  64  23.9 %
Type of Retail Business  Fashion and Apparel Retail  28  10.4 %
Type of Retail Business  Furniture and Home Decor Retail  19  7.1 %
Type of Retail Business  Grocery Retail  48  17.9 %
Type of Retail Business  Health and Wellness Retail  9  3.4 %
Type of Retail Business  Home Improvement Retail  9  3.4 %
Type of Retail Business  Jewelry Retail  10  3.7 %
Type of Retail Business  Pharmacies and Drug Retail  17  6.3 %
Type of Retail Business  Sporting Goods Retail  10  3.7 %
Type of Retail Business  Others  30  11.2 %
Job Title  Business Analyst  44  16.4 %
Job Title  Customer Data Analyst  19  7.1 %
Job Title  Data Analyst  63  23.5 %
Job Title  Environmental Analyst  4  1.5 %
Job Title  Financial Analyst  18  6.7 %
Job Title  HR Analyst  21  7.8 %
Job Title  Market Research Analyst  13  4.9 %
Job Title  Marketing Analyst  16  6.0 %
Job Title  Operations Analyst  6  2.2 %
Job Title  Product Data Analyst  6  2.2 %
Job Title  Quality Assurance Analyst  11  4.1 %
Job Title  Risk Analyst  9  3.4 %
Job Title  Sales Analyst  22  8.2 %
Job Title  Supply Chain Analyst  6  2.2 %
Job Title  Others  10  3.7 %
Experience  Less than 1 year  14  5.2 %
Experience  1–2 years  28  10.4 %
Experience  3–4 years  126  47.0 %
Experience  5 years or more  100  37.3 %

Table 2
Reliability and Validity.

Construct  Item  Outer Loadings  CA  CR  AVE
Citizen Data Scientist's Performance  CDSP1  0.913  0.797  0.797  0.831
Citizen Data Scientist's Performance  CDSP2  0.911
Cost-Efficiency  CEFF1  0.919  0.903  0.906  0.838
Cost-Efficiency  CEFF2  0.925
Cost-Efficiency  CEFF3  0.902
Continuous Learning  CL1  0.790  0.766  0.774  0.589
Continuous Learning  CL2  0.695
Continuous Learning  CL3  0.761
Continuous Learning  CL4  0.819
Dependence on External Resources  DER1  0.873  0.812  0.811  0.642
Dependence on External Resources  DER2  0.851
Dependence on External Resources  DER3  0.857
Data Security and Privacy Concerns  DSDP1  0.841  0.720  0.752  0.638
Data Security and Privacy Concerns  DSDP2  0.843
Data Security and Privacy Concerns  DSDP3  0.706
Data Source Limitations  DSL1  0.906  0.742  0.750  0.794
Data Source Limitations  DSL2  0.876
Data Visualization  DV1  0.681  0.842  0.992  0.735
Data Visualization  DV2  0.937
Data Visualization  DV3  0.930
Ease of Use  EOU1  0.846  0.812  0.811  0.642
Ease of Use  EOU2  0.827
Ease of Use  EOU3  0.810
Ease of Use  EOU4  0.716
Innovative Insights  IINS1  0.937  0.826  0.845  0.851
Innovative Insights  IINS2  0.908
Integration with ML Platforms  IML1  0.916  0.791  0.794  0.827
Integration with ML Platforms  IML2  0.903
Lack of Explainability  LEXP1  0.875  0.747  0.760  0.797
Lack of Explainability  LEXP2  0.911
Misinformation  MINF1  0.957  0.884  0.909  0.895
Misinformation  MINF2  0.936
Tendency to Generate Biased Content  TGBC1  0.900  0.852  0.872  0.771
Tendency to Generate Biased Content  TGBC2  0.830
Tendency to Generate Biased Content  TGBC3  0.903

Table 4 below summarizes the results of path coefficient estimations for various predictors concerning Citizen Data Scientists' Performance.

This is not the case in studies of performance predictors. Still, the result of R2 can be described as moderate. According to the above results, the final research model is shown in Fig. 2.

6. Conclusion
Table 3
HTMT Test.
CDSP CL CEFF DSDP DSL DV DER EOU IINS IML LEXP MINF TGBC
CDSP
CL 0.700
CEFF 0.548 0.422
DSDP 0.808 0.658 0.426
DSL 0.506 0.397 0.625 0.295
DV 0.150 0.068 0.059 0.107 0.110
DER 0.578 0.526 0.596 0.465 0.671 0.076
EOU 0.720 0.663 0.681 0.600 0.751 0.086 0.692
IINS 0.581 0.513 0.482 0.453 0.552 0.063 0.649 0.765
IML 0.399 0.412 0.650 0.281 0.777 0.127 0.661 0.668 0.522
LEXP 0.568 0.665 0.274 0.545 0.234 0.084 0.311 0.458 0.264 0.379
MINF 0.459 0.540 0.572 0.385 0.587 0.070 0.551 0.688 0.584 0.609 0.493
TGBC 0.837 0.557 0.605 0.595 0.589 0.085 0.656 0.691 0.548 0.547 0.456 0.611
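As a companion to Table 2 and Table 3, the following minimal sketch shows how the reported reliability and validity quantities (Cronbach's alpha, composite reliability, AVE, and the HTMT ratio) can be computed from item-level responses. It assumes a pandas DataFrame whose columns follow the Appendix A item codes and takes the outer loadings from an already fitted measurement model; it is an illustration, not the PLS-SEM software workflow used in the study.

```python
# Minimal sketch (illustrative only): Cronbach's alpha, composite reliability (CR),
# average variance extracted (AVE), and the heterotrait-monotrait ratio (HTMT) from
# item-level Likert responses. Column names mirror the Appendix A item codes and are
# assumptions; loadings would come from a fitted measurement model.
import numpy as np
import pandas as pd
from itertools import combinations

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def cr_and_ave(loadings) -> tuple[float, float]:
    """CR = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2)); AVE = mean(lambda^2)."""
    lam = np.asarray(loadings, dtype=float)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())
    return cr, float((lam ** 2).mean())

def htmt(data: pd.DataFrame, items_a: list[str], items_b: list[str]) -> float:
    """Mean between-construct item correlation divided by the geometric mean of the
    mean within-construct item correlations (absolute values)."""
    corr = data[items_a + items_b].corr().abs()
    hetero = corr.loc[items_a, items_b].to_numpy().mean()
    mono_a = np.mean([corr.loc[i, j] for i, j in combinations(items_a, 2)])
    mono_b = np.mean([corr.loc[i, j] for i, j in combinations(items_b, 2)])
    return float(hetero / np.sqrt(mono_a * mono_b))

# Example calls on a hypothetical `responses` DataFrame:
# cronbach_alpha(responses[["EOU1", "EOU2", "EOU3", "EOU4"]])         # compare with CA in Table 2
# cr_and_ave([0.846, 0.827, 0.810, 0.716])                            # Ease of Use loadings from Table 2
# htmt(responses, ["CDSP1", "CDSP2"], ["TGBC1", "TGBC2", "TGBC3"])    # compare with Table 3
```

Values above roughly 0.7 for alpha and CR, above 0.5 for AVE, and HTMT ratios well below 1 correspond to the thresholds applied in Section 5.1.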
study by Yoon and Kim (Yoon and Kim, 2022). The results, on the other hand, indicate that continuous learning influences the performance of citizen data scientists (β=0.132, t-value=2.397, p-value=0.017). It also supports the influence of innovative insights (β=0.101, t-value=1.987, p-value=0.047). These findings encourage further investigation into the factors that genuinely drive citizen data scientists' performance and emphasize the need to consider the study context's specific dynamics.

6.2. Managerial implications

The findings of this work have several implications for practitioners and managers in the retail industry. We discuss these findings based on the SWOT analysis and the assessment of the hypotheses.

6.2.1. Strengths and opportunities
In the case of strengths, as we discussed, generative AI models can offer retail firms a promising future in data science. These models can enhance data science (Hassani and Silva, 2023), as they handle unstructured text well, and help data scientists to improve their workflows and achieve better results (Hassani and Silva, 2023). In addition, text-based data science applications like social media sentiment analysis benefit from AI models' ability to analyze large amounts of text (like online reviews) and extract meaningful information (Krugmann and Hartmann, 2024). In the retail sector, this capability of generative AI models could be important because online customer review analysis through the incorporation of these models will be fast and scalable. Further, in sentiment analysis, which is an important task in business research, generative AI models could be effective. This strength is highlighted by a recent study by (Krugmann and Hartmann, 2024). The authors showed that the emergence of generative AI tools will have a substantial impact on sentiment analysis research. They revealed that Large Language Models (LLMs) are effective and accurate in sentiment classification. This makes them very suitable for immediate integration into business operations. Using textual data analysis, these models can help recommendation agents in the prediction of users' behavior, as in the work by (Zhang et al., 2023). In addition, one of the major strengths of generative AI models is the constant adaptation of these models based on new data and interactions. This ability can highly impact the acceptance of these models in the retail sector, as these models can learn and generate innovative insights from the updated data.

With respect to the opportunities, generative AI models have the potential to improve data analysis and decision-making in the retail sector. Their remarkable capacity to mimic human behavior and generate content presents an array of opportunities for marketing research and practice (Hermann, 2022). The study conducted by Malloy and Gonzalez (Malloy and Gonzalez, 2024) revealed the importance of generative models in predicting actions in decision-modeling research. The findings in the previous literature also suggest that generative AI can significantly enhance personalized marketing communication (Hermann and Puntoni, 2024) and mass personalization of content (Hermann, 2022). In the tourism context, generative AI applications have been able to optimize the management of tourism companies and boost customer engagement (Mondal et al., 2023).

In this study, however, we further confirmed that generative AI models can improve retail organizations from a data analytics perspective. They can contribute to the development of citizen data scientists, as demonstrated by the results of this research. In addition, we found that through the integration of these models with machine learning platforms, these models can be more efficient. Further, through the enhancement of data visualization in these models, the explainability of the presented decisions could be improved. In the work by So (So, 2020), the author concluded that the prediction mechanism of a machine learning model can be uncovered by visualization of particular observations. Models that generate explanations in both visual and linguistic forms have the potential to shed light on phenomena that were previously unexplained. Additionally, they can provide user-friendly interfaces for novice users. Based on the results of previous works, it is expected that visualization tools make AI models robust in terms of explainability and interpretability. In addition, the collaboration between these models and explainable machine learning can broaden the
capabilities of generative AI models in the decision-making process. This will accordingly increase the acceptance of these models in the retail sector. These opportunities suggest that data-driven insights, combined with AI capabilities, can enable more personalized and immersive experiences for users. By harnessing these opportunities, businesses can adapt more swiftly to market trends and thrive in a rapidly changing landscape (Kar et al., 2023).

6.2.2. Threats and weaknesses
This study also revealed some threats and weaknesses of the integration of these models in retail firms, aiming to promote citizen data scientists' performance.

Despite the fact that generative AI models have a great deal of potential, there are threats and weaknesses that must be carefully addressed. The major threat was misinformation; misinformation has emerged as a serious issue in the 21st century, and generative AI is boosting its spread. This issue is comprehensively explained in the study by Shin et al. (Shin et al., 2024). Although AI-generated text has become increasingly popular, experts have raised concerns about the potential for widespread dissemination of misinformation which is posed as credible scientific information (Shin et al., 2024). As the authors highlight, due to the inherent limitations of AI models, the opaqueness of their algorithms, and the possibility of contamination in their data sources, verifying the accuracy of the content that is generated by generative AI models is a complex and difficult task. While the issue of misinformation has been present since the beginning of human civilization, AI is making these challenges worse.

Regarding the weaknesses, firstly, we addressed the issue of data security and privacy concerns associated with generative AI models as an important weakness (Hypothesis 5). Second, the potential for biased content generation in generative AI models poses another significant challenge in the development of citizen data scientists' skills. This was discussed in the seventh hypothesis. Third, lack of explainability is another weakness in generative AI models, which was investigated in the sixth hypothesis. The fifth hypothesis was supported, highlighting the potential risks associated with handling sensitive information by generative AI models. Similar to other AI systems, data security and privacy concerns in generative AI models (Golda et al., 2024; Chen and Esmaeilzadeh, 2024) are found to have a major impact on the effective use of these models in retail firms. In retail businesses, vast amounts of customer data (e.g., personal information) could be collected. Security threats on such data, including data poisoning (Steinhardt et al., 2017) and prompt injection (Mudarova and Namiot, 2024), can result in inaccurate decisions. In addition, malicious attacks (Schneider et al., 2011) can exploit a model's impressive generation ability to extract information from personal data. Manipulating input data can deceive AI models, causing incorrect or biased sentiment analysis. In addition, considering online customer reviews, malicious actors can submit intentionally crafted reviews designed to confuse generative AI models. These reviews are constructed to exploit specific weaknesses in the AI's algorithm. In addition, large volumes of fake reviews can be generated to artificially inflate or deflate the perceived sentiment of a product or service. The AI model might get trained on these fake reviews, leading to biased sentiment analysis.

We found a negative impact of biased content generation on the performance of citizen data scientists in the retail sector. Our finding can be explained by the fact that biased content generation in generative AI models can indeed lead to a loss of trust in AI models (Ferrara, 2023). As a result, low adoption rates can occur in these systems. For example, gender bias in AI has emerged as a pressing concern (Nemani et al., 2023), which could potentially occur in generative AI models in the decision-making process. In fact, gender bias can lead to unequal treatment of customers, which can harm satisfaction and loyalty. Further, if these models are biased, they might recommend gender-specific products incorrectly, leading to reduced sales and customer dissatisfaction. For AI model developers, these weaknesses could be opportunities for enhancement and for expanding collaboration with other experts in the security field. The study's findings highlight the importance of implementing robust encryption and access control mechanisms to protect sensitive data during training and deployment. In addition, to mitigate biased content generation in a generative AI model, bias detection and mitigation techniques must be developed (Nemani et al., 2023; Hort et al., 2023). The survey performed by (Hort et al., 2023) has widely investigated these techniques.

According to the outcome of the model evaluation, it was found that explainability issues can play an important role in the performance of generative AI models. As discussed in the work by (Mahbooba et al., 2021), XAI (Explainable AI) enables users to understand and have trust in the results and outputs generated by machine learning algorithms. Arrieta et al. (Arrieta et al., 2020) highlighted that XAI is extremely essential for the successful implementation of responsible AI. This issue focuses on deploying AI techniques in organizations while prioritizing fairness, model explainability, and accountability. In this work, we also found a negative relationship between the lack of explainability and the performance of citizen data scientists. It is crucial to incorporate ethical considerations into the development and deployment of generative AI models. This will ensure that their usage is fair, safe, and beneficial for society as a whole. In order to achieve that, it would be beneficial for generative AI to have explainability. By including this measure in generative AI models, the transparency of the model's decision-making process could be improved (Saeed and Omlin, 2023). Therefore, it is recommended that, in case these models fail to satisfy any of the criteria imposed in order to declare them transparent, a different method must be developed and applied to the model in order to explain the decisions that they have made. However, since responsible AI integrates model explainability with privacy and security by design, it is essential to deeply consider the benefits and risks of considering explainability in generative AI models, especially in scenarios involving sensitive information.

6.3. Limitations and future research

In the context of this research, it is prudent to acknowledge specific limitations, which can serve as valuable areas for future research exploration and improvement. First, this study focuses on the retail sector, so other business sectors should be investigated to better refine the strengths, weaknesses, opportunities, and threats of these tools. In addition, this research primarily focuses on a specific geographic region: Malaysia. The findings and conclusions can be further validated in various countries. Including diverse views is crucial in promoting the acceptance and proficient application of generative AI models to improve data science expertise within retail enterprises. Moreover, assessing the long-term impact on the performance and empowerment of data science skills acquired through generative AI models is crucial. Future research should investigate strategies to ensure that citizen data scientists can effectively apply their knowledge over time, bridging the gap between initial training and sustainable expertise. As innovative tools, the impact of the deployment of generative AI models on gaining a competitive advantage within the market could also be examined in future research. In this work, we only considered the impacts of the proposed factors on the performance of citizen data scientists. Future work might investigate the relationships between the proposed factors to gain more insights into how these factors influence each other.

Funding

No funding was received for conducting this study.

CRediT authorship contribution statement

Mehrbakhsh Nilashi: Writing – original draft, Validation, Formal analysis, Conceptualization. Rabab Ali Abumalloh: Writing – review &
This study was limited to an exploration of the strengths, weaknesses, opportunities, and threats of these tools. In addition, this research primarily focuses on a specific geographic region, Malaysia; the findings and conclusions can be further validated in various countries. Including diverse views is crucial in promoting the acceptance and proficient application of generative AI models to improve data science expertise within retail enterprises. Moreover, assessing the long-term impact on the performance and empowerment of data science skills acquired through generative AI models is crucial. Future research should investigate strategies to ensure that citizen data scientists can effectively apply their knowledge over time, bridging the gap between initial training and sustainable expertise. As these are innovative tools, the impact of deploying generative AI models on gaining a competitive advantage within the market could also be examined in future research. In this work, we only considered the impacts of the proposed factors on the performance of citizen data scientists; future work might investigate the relationships among the proposed factors to gain more insight into how they influence one another.

Funding

No funding was received for conducting this study.

CRediT authorship contribution statement

Mehrbakhsh Nilashi: Writing – original draft, Validation, Formal analysis, Conceptualization. Rabab Ali Abumalloh: Writing – review & editing, Writing – original draft, Validation, Methodology, Formal analysis, Data curation, Conceptualization. Garry Wei Han Tan: Writing – review & editing, Validation. Keng Boon Ooi: Writing – review & editing, Validation, Resources, Project administration. Hing Kai Chan: Writing – review & editing, Validation.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data Availability

Data will be made available on request.
Constructs and measurement items

Citizen Data Scientist's Performance (CDSP)
CDSP1: I am more knowledgeable than those who work in my field.
CDSP2: My performance satisfies my manager's standards.

Cost-Efficiency (CEFF)
CEFF1: I believe that generative AI technology is a cost-efficient tool.
CEFF2: In my opinion, generative AI can significantly enhance the cost efficiency of data analysis.
CEFF3: I perceive generative AI as a cost-effective solution for various analysis tasks.

Continuous Learning (CL)
CL1: Generative AI has the capability of continuous learning through updating with new data.
CL2: Generative AI has the capability of continuous learning through adapting to changing language patterns.
CL3: Generative AI has the capability of continuous learning through refining its algorithms.
CL4: Generative AI has the capability of continuous learning through adapting to users' needs.

Dependence on External Resources (DER, reverse scale)
DER1: I don't consider the dependence on external resources as a major factor affecting generative AI models.
DER2: I don't believe that the dependence on external resources significantly affects the effective functioning of generative AI models.
DER3: I think that the need for external resources doesn't influence how well generative AI models function.

Data Security and Privacy Concerns (DSDP, reverse scale)
DSDP1: In my opinion, security and privacy are critical in generative AI models.
DSDP2: I consider security and privacy as top priorities when it comes to generative AI models.
DSDP3: I value the role of security and privacy measures in generative AI models for safeguarding sensitive data.

Data Source Limitations (DSL, reverse scale)
DSL1: Generative AI models often have unlimited data sources available for their training.
DSL2: I believe that data sources for generative AI models are readily available and not frequently restricted.

Data Visualization (DV)
DV1: In my opinion, data visualization of generative AI models is a crucial aspect of effective data analysis.
DV2: I perceive data visualization of generative AI models as a powerful means to uncover hidden patterns and trends in data.
DV3: Data visualization of generative AI models enhances the ability to interpret and extract valuable insights from data.

Ease of Use (EOU)
EOU1: Generative AI solutions are user-friendly.
EOU2: Generative AI solutions are easy to use.
EOU3: Generative AI tools are known for their user-friendliness.
EOU4: It is not difficult to use generative AI tools for data analysis.

Innovative Insights (IINS)
IINS1: Generative AI models have the capability to provide unique and valuable insights when analyzing and generating natural language text.
IINS2: Generative AI models are capable of producing creative and contextually relevant insights that were previously unexplored.

Integration with ML Platforms (IML, reverse scale)
IML1: I don't consider generative AI models as valuable components within machine learning platforms for different analysis tasks.
IML2: The integration of generative AI with machine learning platforms has minimal impact on the performance and accuracy of data analysis.

Lack of Explainability (LEXP, reverse scale)
LEXP1: In my opinion, explainability makes the reasoning behind AI system actions transparent and understandable to humans.
LEXP2: I think explainability helps people understand why AI systems make their decisions.

Misinformation (MINF)
MINF1: In my view, generative AI models have the potential to facilitate the rapid spread of misinformation.
MINF2: Generative AI models can generate and propagate false information at a large scale.

Tendency to Generate Biased Content (TGBC, reverse scale)
TGBC1: In my opinion, ensuring that generative AI systems do not produce biased content is a fundamental requirement.
TGBC2: I believe that mitigating the tendency of generative AI to generate biased content is essential for ensuring equitable technology.
TGBC3: Addressing the tendency of generative AI to produce biased content is crucial for promoting fairness and ethical AI use.
Loadings and cross-loadings of the measurement items

Item CDSP CEFF CL DER DSDP DSL DV EOU IINS IML LEXP MINF TGBC
CDSP1 0.913 0.42 0.489 0.44 0.588 0.352 0.121 0.531 0.394 0.269 0.415 0.338 0.642
CDSP2 0.911 0.43 0.512 0.451 0.558 0.36 0.148 0.534 0.473 0.31 0.389 0.372 0.633
CEFF1 0.453 0.919 0.395 0.533 0.364 0.494 0.043 0.593 0.405 0.523 0.222 0.53 0.536
CEFF2 0.408 0.925 0.318 0.469 0.349 0.457 0.07 0.507 0.352 0.49 0.214 0.449 0.452
CEFF3 0.417 0.902 0.252 0.447 0.267 0.458 0.05 0.495 0.395 0.493 0.182 0.428 0.47
CL1 0.421 0.306 0.79 0.306 0.383 0.203 -0.045 0.427 0.231 0.261 0.434 0.326 0.307
CL2 0.378 0.245 0.695 0.283 0.359 0.285 -0.011 0.378 0.367 0.217 0.326 0.304 0.315
CL3 0.402 0.249 0.761 0.371 0.318 0.243 0.01 0.385 0.298 0.269 0.309 0.354 0.368
CL4 0.477 0.283 0.819 0.352 0.439 0.19 0.046 0.443 0.362 0.24 0.488 0.394 0.377
DER1 0.515 0.521 0.412 0.873 0.418 0.544 0.027 0.53 0.466 0.477 0.29 0.466 0.564
DER2 0.358 0.374 0.34 0.851 0.288 0.406 -0.016 0.434 0.462 0.433 0.14 0.38 0.409
DER3 0.345 0.446 0.335 0.857 0.242 0.411 -0.07 0.51 0.455 0.477 0.211 0.384 0.451
DSDP1 0.567 0.28 0.423 0.341 0.841 0.254 0.085 0.424 0.277 0.181 0.361 0.248 0.441
DSDP2 0.545 0.378 0.393 0.322 0.843 0.187 0.095 0.431 0.319 0.219 0.307 0.253 0.403
DSDP3 0.36 0.171 0.363 0.24 0.706 0.076 0.061 0.263 0.25 0.113 0.302 0.243 0.28
DSL1 0.369 0.489 0.284 0.477 0.202 0.906 -0.043 0.515 0.392 0.523 0.154 0.448 0.419
DSL2 0.324 0.424 0.241 0.488 0.209 0.876 -0.006 0.523 0.377 0.54 0.157 0.401 0.415
DV1 0.035 0.02 -0.017 -0.081 -0.007 -0.118 0.681 -0.088 -0.073 -0.137 -0.061 -0.092 -0.096
DV2 0.162 0.082 0.008 0.017 0.109 0.033 0.937 0.034 0.061 0.036 -0.025 0.011 0.06
DV3 0.126 0.027 -0.001 -0.035 0.1 -0.076 0.93 -0.054 0.003 -0.098 -0.046 -0.05 -0.032
EOU1 0.471 0.515 0.482 0.513 0.361 0.514 0.053 0.846 0.628 0.476 0.226 0.521 0.447
EOU2 0.442 0.453 0.352 0.458 0.347 0.476 -0.039 0.827 0.521 0.47 0.251 0.414 0.463
EOU3 0.431 0.54 0.299 0.45 0.323 0.487 -0.035 0.81 0.49 0.442 0.169 0.479 0.441
EOU4 0.51 0.366 0.543 0.416 0.481 0.385 -0.029 0.716 0.37 0.324 0.501 0.448 0.492
IINS1 0.474 0.422 0.399 0.476 0.357 0.392 0.031 0.594 0.937 0.377 0.196 0.479 0.449
IINS2 0.396 0.349 0.353 0.517 0.291 0.407 0.017 0.561 0.908 0.401 0.188 0.441 0.396
IML1 0.299 0.464 0.316 0.487 0.226 0.575 -0.05 0.48 0.353 0.916 0.293 0.443 0.406
IML2 0.278 0.537 0.266 0.494 0.173 0.506 -0.019 0.491 0.413 0.903 0.236 0.484 0.404
LEXP1 0.36 0.194 0.404 0.192 0.308 0.149 0.022 0.305 0.172 0.265 0.875 0.365 0.318
LEXP2 0.423 0.209 0.506 0.266 0.407 0.162 -0.087 0.352 0.199 0.258 0.911 0.353 0.332
MINF1 0.401 0.499 0.473 0.489 0.312 0.461 -0.036 0.551 0.471 0.482 0.396 0.957 0.532
MINF2 0.329 0.473 0.372 0.422 0.266 0.442 -0.012 0.556 0.477 0.481 0.361 0.936 0.459
TGBC1 0.672 0.432 0.361 0.482 0.439 0.395 -0.003 0.515 0.377 0.36 0.319 0.395 0.9
TGBC2 0.497 0.44 0.42 0.493 0.367 0.397 -0.008 0.479 0.407 0.414 0.307 0.508 0.83
TGBC3 0.651 0.531 0.407 0.517 0.448 0.441 0.028 0.53 0.434 0.41 0.335 0.504 0.903
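For readers less familiar with such matrices, each cell above is the correlation between a measurement item and a construct score; an item's loading on its own construct should exceed its cross-loadings on all other constructs. The short Python sketch below recomputes one column of such a matrix from hypothetical Likert-scale responses; the mean of a construct's items is used as a simple stand-in for the weighted composite scores that PLS-SEM software actually estimates.

from math import sqrt

def pearson(x, y):
    # Pearson correlation between two equally long lists of numbers.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical five-point responses from six participants (not the study's data).
items = {
    "CDSP1": [4, 5, 3, 4, 2, 5],
    "CDSP2": [4, 4, 3, 5, 2, 5],
    "CEFF1": [3, 5, 2, 4, 3, 4],
}

# Simple construct score for CDSP: the mean of its two indicators.
cdsp_score = [(a + b) / 2 for a, b in zip(items["CDSP1"], items["CDSP2"])]

for name, responses in items.items():
    print(f"{name} vs. CDSP score: {pearson(responses, cdsp_score):.3f}")

With the real survey responses and the construct scores estimated by the PLS algorithm, the same calculation reproduces the loadings and cross-loadings reported above.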
References

Abumalloh, R.A., Ibrahim, O., Nilashi, M., 2020. Loyalty of young female Arabic customers towards recommendation agents: a new model for B2C e-commerce. Technol. Soc. 61, 101253.
Adamopoulou, E., Moussiades, L., 2020. Chatbots: history, technology, and applications. Mach. Learn. Appl. 2, 100006.
Adam, M., Wessel, M., Benlian, A., 2021. AI-based chatbots in customer service and their effects on user compliance. Electron. Mark. 31 (2), 427–445.
Ahani, A., Nilashi, M., Ibrahim, O., Sanzogni, L., Weaven, S., 2019. Market segmentation and travel choice prediction in Spa hotels through TripAdvisor's online reviews. Int. J. Hosp. Manag. 80, 52–77.
Akter, S., Hossain, M.A., Sajib, S., Sultana, S., Rahman, M., Vrontis, D., McCarthy, G., 2023. A framework for AI-powered service innovation capability: review and agenda for future research. Technovation 125, 102768.
Alalwan, A.A., Dwivedi, Y.K., Rana, N.P., 2017. Factors influencing adoption of mobile banking by Jordanian bank customers: extending UTAUT2 with trust. Int. J. Inf. Manag. 37, 99–110.
Albayati, H., 2024. Investigating undergraduate students' perceptions and awareness of using ChatGPT as a regular assistance tool: a user acceptance perspective study. Comput. Educ. Artif. Intell. 6, 100203.
Almulla, M.A., 2024. Investigating influencing factors of learning satisfaction in AI ChatGPT for research: university students perspective. Heliyon 10, e32220.
Alpar, P., Schulz, M., 2022. More data analysis with citizen data scientists? World Conference on Information Systems and Technologies. Springer, pp. 122–130.
An, H., Li, X., Huang, Y., Wang, W., Wu, Y., Liu, L., Ling, W., Li, W., Zhao, H., Lu, D., Liu, Q., Jiang, G., 2024. A new ChatGPT-empowered, easy-to-use machine learning paradigm for environmental science. Eco-environ. Health 3, 131–136.
Arrieta, A.B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., 2020. Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115.
Baabdullah, A.M., 2024. Generative conversational AI agent for managerial practices: the role of IQ dimensions, novelty seeking and ethical concerns. Technol. Forecast. Soc. Change 198, 122951.
Bellini, P., Palesi, L.A.I., Nesi, P., Pantaleo, G., 2023. Multi clustering recommendation system for fashion retail. Multimed. Tools Appl. 82, 9989–10016.
Bonnevie, E., Sittig, J., Smyser, J., 2021. The case for tracking misinformation the way we track disease. Big Data Soc. 8, 20539517211013867.
Bouteraa, M., Chekima, B., Thurasamy, R., Bin-Nashwan, S.A., Al-Daihani, M., Baddou, A., Sadallah, M., Ansar, R., 2024. Open innovation in the financial sector: a mixed-methods approach to assess bankers' willingness to embrace open-AI ChatGPT. J. Open Innov. Technol. Mark. Complex. 10, 100216.
Bull, J.W., Jobstvogt, N., Böhnke-Henrichs, A., Mascarenhas, A., Sitas, N., Baulcomb, C., Lambini, C.K., Rawlins, M., Baral, H., Zähringer, J., 2016. Strengths, weaknesses, opportunities and threats: a SWOT analysis of the ecosystem services framework. Ecosyst. Serv. 17, 99–111.
Carvalho, I., Ivanov, S., 2023. ChatGPT for tourism: applications, benefits and risks. Tourism Review.
Chen, Y., Esmaeilzadeh, P., 2024. Generative AI in medical practice: in-depth exploration of privacy and security challenges. J. Med. Internet Res. 26, e53008.
Cruz-Jesus, F., Pinheiro, A., Oliveira, T., 2019. Understanding CRM adoption stages: empirical analysis building on the TOE framework. Comput. Ind. 109, 1–13.
Currie, G.M., Hawk, K.E., Rohren, E.M., 2024. Generative artificial intelligence biases, limitations and risks in nuclear medicine: an argument for appropriate use framework and recommendations. Semin. Nucl. Med.
Davenport, T.H., Patil, D., 2012. Data scientist. Harv. Bus. Rev. 90, 70–76.
Del Ser, J., Barredo-Arrieta, A., Díaz-Rodríguez, N., Herrera, F., Saranti, A., Holzinger, A., 2024. On generating trustworthy counterfactual explanations. Inf. Sci. 655, 119898.
Derner, E., Batistič, K., 2023. Beyond the safeguards: exploring the security risks of ChatGPT. arXiv preprint arXiv:2305.08005.
Dorsey, D.W., 2019. Big data, data science, and career pathways. Career Pathw. Sch. Retire. 239.
Ferrara, E., 2023. Fairness and bias in artificial intelligence: a brief survey of sources, impacts, and mitigation strategies. Sci 6, 3.
Fischer, J.E., 2023. Generative AI considered harmful. Proc. 5th Int. Conf. Conversat. User Interfaces 1–5.
Floyd, K., Freling, R., Alhoqail, S., Cho, H.Y., Freling, T., 2014. How online product reviews affect retail sales: a meta-analysis. J. Retail. 90, 217–232.
Fuchs, K., 2023. Exploring the opportunities and challenges of NLP models in higher education: is ChatGPT a blessing or a curse? Front. Educ., 1166682.
Furstenau, L.B., Leivas, P., Sott, M.K., Dohan, M.S., López-Robles, J.R., Cobo, M.J., Bragazzi, N.L., Choo, K.-K.R., 2023. Big data in healthcare: conceptual network structure, key challenges and opportunities. Digital Communications and Networks.
Gala, D., Makaryus, A.N., 2023. The utility of language models in cardiology: a narrative review of the benefits and concerns of ChatGPT-4. Int. J. Environ. Res. Public Health 20, 6438.
Gefen, D., Keil, M., 1998. The impact of developer responsiveness on perceptions of usefulness and ease of use: an extension of the technology acceptance model. ACM SIGMIS Database: Database Adv. Inf. Syst. 29, 35–49.
Golda, A., Mekonen, K., Pandey, A., Singh, A., Hassija, V., Chamola, V., Sikdar, B., 2024. Privacy and security concerns in generative AI: a comprehensive survey. IEEE Access.
Gröger, C., 2018. Building an Industry 4.0 analytics platform: practical challenges, approaches and future research directions. Datenbank-Spektrum 18, 5–14.
Hair, J., Hult, G.T.M., Ringle, C.M., Sarstedt, M., 2013. A Primer on Partial Least Squares Structural Equation Modeling. SAGE Publications, Thousand Oaks, United States.
Hair, J.F., Ringle, C.M., Sarstedt, M., 2011. PLS-SEM: indeed a silver bullet. J. Mark. Theory Pract. 19, 139–152.
Hair Jr, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M., 2021. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). Sage Publications.
Han, R., Lam, H.K., Zhan, Y., Wang, Y., Dwivedi, Y.K., Tan, K.H., 2021. Artificial intelligence in business-to-business marketing: a bibliometric analysis of current research status, development and future directions. Ind. Manag. Data Syst. 121, 2467–2497.
Han, Z., Battaglia, F., Udaiyar, A., Fooks, A., Terlecky, S.R., 2024. An explorative assessment of ChatGPT as an aid in medical education: use it with caution. Med. Teach. 46, 657–664.
Hassani, H., Silva, E.S., 2023. The role of ChatGPT in data science: how AI-assisted conversational interfaces are revolutionizing the field. Big Data Cogn. Comput. 7, 62.
Hermann, E., 2022. Artificial intelligence and mass personalization of communication content—an ethical and literacy perspective. N. Media Soc. 24, 1258–1277.
Hermann, E., Puntoni, S., 2024. Artificial intelligence and consumer behavior: from predictive to generative AI. J. Bus. Res. 180, 114720.
Hort, M., Chen, Z., Zhang, J.M., Harman, M., Sarro, F., 2023. Bias mitigation for machine learning classifiers: a comprehensive survey. ACM J. Responsible Comput.
Huang, H., Zheng, O., Wang, D., Yin, J., Wang, Z., Ding, S., Yin, H., Xu, C., Yang, R., Zheng, Q., 2023. ChatGPT for shaping the future of dentistry: the potential of multi-modal large language model. Int. J. Oral Sci. 15, 29.
Kangaspunta, J., Liesiö, J., Salo, A., 2012. Cost-efficiency analysis of weapon system portfolios. Eur. J. Oper. Res. 223, 264–275.
Kar, A.K., Varsha, P., Rajan, S., 2023. Unravelling the impact of generative artificial intelligence (GAI) in industrial applications: a review of scientific and grey literature. Glob. J. Flex. Syst. Manag. 24, 659–689.
Karahanna, E., Straub, D.W., 1999. The psychological origins of perceived usefulness and ease-of-use. Inf. Manag. 35, 237–250.
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103, 102274.
Kava, H., Spanaki, K., Papadopoulos, T., Despoudi, S., Rodriguez-Espindola, O., Fakhimi, M., 2021. Data analytics diffusion in the UK renewable energy sector: an innovation perspective. Ann. Oper. Res. 1–26.
Khan, S., Rabbani, M.R., 2021. Artificial intelligence and NLP-based chatbot for Islamic banking and finance. Int. J. Inf. Retr. Res. (IJIRR) 11, 65–77.
Kim, M., Zimmermann, T., DeLine, R., Begel, A., 2016. The emerging role of data scientists on software development teams. Proc. 38th Int. Conf. Softw. Eng. 96–107.
King, K.K., Wang, B., 2021. Diffusion of real versus misinformation during a crisis event: a big data-driven approach. Int. J. Inf. Manag., 102390.
Krugmann, J.O., Hartmann, J., 2024. Sentiment analysis in the age of generative AI. Cust. Needs Solut. 11, 3.
Kumar, A., Gupta, N., Bapat, G., 2023. Who is making the decisions? How retail managers can use the power of ChatGPT. J. Bus. Strategy.
Lai, C.Y., Cheung, K.Y., Chan, C.S., 2023. Exploring the role of intrinsic motivation in ChatGPT adoption to support active learning: an extension of the technology acceptance model. Comput. Educ. Artif. Intell. 5, 100178.
Lai, C.Y., Cheung, K.Y., Chan, C.S., Law, K.K., 2024. Integrating the adapted UTAUT model with moral obligation, trust and perceived risk to predict ChatGPT adoption for assessment support: a survey with students. Comput. Educ. Artif. Intell. 6, 100246.
Lawrence, D., 2019. Non-data scientists: the evolving role of clinical data management. Appl. Clin. Trials 28, 25.
LeCun, Y., Bengio, Y., Hinton, G., 2015. Deep learning. Nature 521, 436–444.
Lencastre, P., Gjersdal, M., Gorjão, L.R., Yazidi, A., Lind, P.G., 2023. Modern AI versus century-old mathematical models: how far can we go with generative adversarial networks to reproduce stochastic processes? Phys. D Nonlinear Phenom. 453, 133831.
Lima, G., Grgić-Hlača, N., Jeong, J.K., Cha, M., 2022. The conflict between explainable and accountable decision-making algorithms. Proc. 2022 ACM Conf. Fairness Acc. Transpar. 2103–2113.
Line, N.D., Dogru, T., El-Manstrly, D., Buoye, A., Malthouse, E., Kandampully, J., 2020. Control, use and ownership of big data: a reciprocal view of customer big data value in the hospitality and tourism industry. Tour. Manag. 80, 104106.
Lund, B.D., Wang, T., Mannuru, N.R., Nie, B., Shimray, S., Wang, Z., 2023. ChatGPT and a new academic reality: artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. J. Assoc. Inf. Sci. Technol. 74, 570–581.
Mahbooba, B., Timilsina, M., Sahal, R., Serrano, M., 2021. Explainable artificial intelligence (XAI) to enhance trust management in intrusion detection systems using decision tree model. Complexity 2021, 6634811.
Malloy, T., Gonzalez, C., 2024. Applying generative artificial intelligence to cognitive models of decision making. Front. Psychol. 15, 1387948.
McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E., 2006. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag. 27, 12.
Merkelbach, S., Von Enzberg, S., Kühn, A., Dumitrescu, R., 2022. Towards a process model to enable domain experts to become citizen data scientists for industrial applications. 2022 IEEE 5th International Conference on Industrial Cyber-Physical Systems (ICPS). IEEE, pp. 1–6.
Meske, C., Bunde, E., Schneider, J., Gersch, M., 2022. Explainable artificial intelligence: objectives, stakeholders, and future research opportunities. Inf. Syst. Manag. 39, 53–63.
Miao, H., Guo, X., Yuan, F., 2021. Research on identification of potential directions of artificial intelligence industry from the perspective of weak signal. IEEE Trans. Eng. Manag.
Mondal, S., Das, S., Vrana, V.G., 2023. How to bell the cat? A theoretical review of generative artificial intelligence towards digital disruption in all walks of life. Technologies 11, 44.
Monteith, S., Glenn, T., Geddes, J.R., Whybrow, P.C., Achtyes, E., Bauer, M., 2024. Artificial intelligence and increasing misinformation. Br. J. Psychiatry 224, 33–35.
Mudarova, R., Namiot, D., 2024. Countering prompt injection attacks on large language models. Int. J. Open Inf. Technol. 12, 39–48.
Mullarkey, M.T., Hevner, A.R., Grandon Gill, T., Dutta, K., 2019. Citizen data scientist: a design science research method for the conduct of data science projects. Extending the Boundaries of Design Science Theory and Practice: 14th International Conference on Design Science Research in Information Systems and Technology, DESRIST 2019, Worcester, MA, USA, June 4–6, 2019, Proceedings 14. Springer, pp. 191–205.
Nemani, P., Joel, Y.D., Vijay, P., Liza, F.F., 2023. Gender bias in transformers: a comprehensive review of detection and mitigation strategies. Nat. Lang. Process. J., 100047.
Nilashi, M., Keng Boon, O., Tan, G., Lin, B., Abumalloh, R., 2023. Critical data challenges in measuring the performance of sustainable development goals: solutions and the role of big-data analytics. Harv. Data Sci. Rev. 5, 3–4.
Niu, Y., Ying, L., Yang, J., Bao, M., Sivaparthipan, C., 2021. Organizational business intelligence and decision making using big data analytics. Inf. Process. Manag. 58, 102725.
Noguerol, T.M., Paulano-Godino, F., Martín-Valdivia, M.T., Menias, C.O., Luna, A., 2019. Strengths, weaknesses, opportunities, and threats analysis of artificial intelligence and machine learning applications in radiology. J. Am. Coll. Radiol. 16, 1239–1247.
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M.E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., 2020. Bias in data-driven artificial intelligence systems—an introductory survey. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 10, e1356.
Okuda, T., Shoda, S., 2018. AI-based chatbot service for financial industry. Fujitsu Sci. Tech. J. 54, 4–8.
Olshannikova, E., Ometov, A., Koucheryavy, Y., Olsson, T., 2015. Visualizing Big Data with augmented and virtual reality: challenges and research agenda. J. Big Data 2, 1–27.
Orden-Mejia, M., Huertas, A., 2022. Analysis of the attributes of smart tourism technologies in destination chatbots that influence tourist satisfaction. Curr. Issues Tour. 25, 2854–2869.
Palanica, A., Flaschner, P., Thommandram, A., Li, M., Fossat, Y., 2019. Physicians' perceptions of chatbots in health care: cross-sectional web-based survey. J. Med. Internet Res. 21, e12887.
Pianykh, O.S., Langs, G., Dewey, M., Enzmann, D.R., Herold, C.J., Schoenberg, S.O., Brink, J.A., 2020. Continuous learning AI in radiology: implementation principles and early applications. Radiology 297, 6–14.
Pursnani, V., Sermet, Y., Kurt, M., Demir, I., 2023. Performance of ChatGPT on the US fundamentals of engineering exam: comprehensive assessment of proficiency and potential implications for professional environmental engineering practice. Comput. Educ. Artif. Intell. 5, 100183.
Qadir, J., 2023. Engineering education in the era of ChatGPT: promise and pitfalls of generative AI for education. 2023 IEEE Global Engineering Education Conference (EDUCON). IEEE, pp. 1–9.
Qin, S.J., Chiang, L.H., 2019. Advances and opportunities in machine learning for process data analytics. Comput. Chem. Eng. 126, 465–473.
Rajnoha, R., Hadač, J., 2021. Strategic key elements in big data analytics as driving forces of IoT manufacturing value creation: a challenge for research framework. IEEE Trans. Eng. Manag.
Ranjan, J., Foropon, C., 2021. Big data analytics in building the competitive intelligence of organizations. Int. J. Inf. Manag. 56, 102231.
Raschka, S., Patterson, J., Nolet, C., 2020. Machine learning in Python: main developments and technology trends in data science, machine learning, and artificial intelligence. Information 11, 193.
Rodriguez, R.R., Saiz, J.J.A., Bas, A.O., 2009. Quantitative relationships between key performance indicators for supporting decision-making processes. Comput. Ind. 60, 104–113.
Saeed, W., Omlin, C., 2023. Explainable AI (XAI): a systematic meta-survey of current challenges and future opportunities. Knowl. Based Syst. 263, 110273.
Sáez, C., Romero, N., Conejero, J.A., García-Gómez, J.M., 2021. Potential limitations in COVID-19 machine learning due to data source variability: a case study in the nCov2019 dataset. J. Am. Med. Inf. Assoc. 28, 360–364.
Saura, J.R., Ribeiro-Soriano, D., Palacios-Marqués, D., 2022. Evaluating security and privacy issues of social networks based information systems in Industry 4.0. Enterp. Inf. Syst. 16, 1694–1710.
Schneider, C.M., Moreira, A.A., Andrade Jr, J.S., Havlin, S., Herrmann, H.J., 2011. Mitigation of malicious attacks on networks. Proc. Natl. Acad. Sci. 108, 3838–3841.
Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., Hall, P., 2022. Towards a standard for identifying and managing bias in artificial intelligence. US Department of Commerce, National Institute of Standards and Technology.
Shin, D., Koerber, A., Lim, J.S., 2024. Impact of misinformation from generative AI on user information processing: how people understand misinformation from generative AI. New Media & Society, 14614448241234040.
So, C., 2020. Understanding the prediction mechanism of sentiments by XAI visualization. Proc. 4th Int. Conf. Nat. Lang. Process. Inf. Retr. 75–80.
Song, I.Y., Zhu, Y., 2016. Big data and data science: what should we teach? Expert Syst. 33, 364–373.
Srinivasan, R., Uchino, K., 2021. Biases in generative art: a causal look from the lens of art history. Proc. 2021 ACM Conf. Fairness Account. Transpar. 41–51.
Stanula, P., Ziegenbein, A., Metternich, J., 2018. Machine learning algorithms in production: a guideline for efficient data source selection. Procedia CIRP 78, 261–266.
Statista, 2022. Global Big Data Analytics Market Size 2021–2029.
Statista, 2023b. Number of data science employees in companies worldwide 2020–2021.
Statista, 2023a. Generative AI – Worldwide.
Steinhardt, J., Koh, P.W.W., Liang, P.S., 2017. Certified defenses for data poisoning attacks. Adv. Neural Inf. Process. Syst. 30.
Stöger, K., Schneeberger, D., Kieseberg, P., Holzinger, A., 2021. Legal aspects of data cleansing in medical AI. Comput. Law Secur. Rev. 42, 105587.
Stylos, N., Zwiegelaar, J., Buhalis, D., 2021. Big data empowered agility for dynamic, volatile, and time-sensitive service industries: the case of tourism sector. Int. J. Contemp. Hosp. Manag. 33, 1015–1036.
Suhail, S., Iqbal, M., Hussain, R., Jurdak, R., 2023. ENIGMA: an explainable digital twin security solution for cyber–physical systems. Comput. Ind. 151, 103961.
Tchuente, D., Lonlac, J., Kamsu-Foguem, B., 2024. A methodological and theoretical framework for implementing explainable artificial intelligence (XAI) in business applications. Comput. Ind. 155, 104044.
Tortora, L., 2024. Beyond discrimination: generative AI applications and ethical challenges in forensic psychiatry. Front. Psychiatry 15, 1346059.
Turn, Ware, 1976. Privacy and security issues in information systems. IEEE Trans. Comput. 100, 1353–1361.
Vassakis, K., Petrakis, E., Kopanakis, I., 2018. Big data analytics: applications, prospects and challenges. Mobile Big Data: A Roadmap from Models to Technologies, pp. 3–20.
Venkatesh, V., Davis, F.D., 1996. A model of the antecedents of perceived ease of use: development and test. Decis. Sci. 27, 451–481.
Villanueva Zacarias, A.G., Reimann, P., Weber, C., Mitschang, B., 2023. AssistML: an approach to manage, recommend and reuse ML solutions. Int. J. Data Sci. Anal., 1–25.
Wamba, S.F., Queiroz, M.M., Jabbour, C.J.C., Shi, C.V., 2023a. Are both generative AI and ChatGPT game changers for 21st-century operations and supply chain excellence? Int. J. Prod. Econ., 109015.
Wamba, S.F., Queiroz, M.M., Jabbour, C.J.C., Shi, C.V., 2023b. Are both generative AI and ChatGPT game changers for 21st-century operations and supply chain excellence? Int. J. Prod. Econ. 265, 109015.
Wang, X.S., Ryoo, J.H.J., Bendle, N., Kopalle, P.K., 2021. The role of machine learning analytics and metrics in retailing research. J. Retail. 97, 658–675.
Whalen, J., Mouza, C., 2023. ChatGPT: challenges, opportunities, and implications for teacher education. Contemp. Issues Technol. Teach. Educ. 23, 1–23.
Whang, S.E., Roh, Y., Song, H., Lee, J.-G., 2023. Data collection and quality challenges in deep learning: a data-centric AI perspective. VLDB J. 32, 791–813.
Wise, A.F., 2022. Educating data scientists and data literate citizens for a new generation of data. Situating Data Science. Routledge, pp. 165–181.
Wong, I.A., Lian, Q.L., Sun, D., 2023. Autonomous travel decision-making: an early glimpse into ChatGPT and generative AI. J. Hosp. Tour. Manag. 56, 253–263.
Xu, L., Sanders, L., Li, K., Chow, J.C., 2021. Chatbot for health care and oncology applications using artificial intelligence and machine learning: systematic review. JMIR Cancer 7, e27850.
Yadegaridehkordi, E., Nilashi, M., Shuib, L., Nasir, M.H., Asadi, S., Samad, S., Awang, N.F., 2020. The impact of big data on firm performance in hotel industry. Electron. Commer. Res. Appl. 40, 100921.
Yallop, A., Seraphin, H., 2020. Big data and analytics in tourism and hospitality: opportunities and risks. J. Tour. Futures 6, 257–262.
Yoon, S.-J., Kim, M.-Y., 2022. A study on deriving improvements through user recognition analysis of artificial intelligence speakers. Appl. Sci. 12, 9651.
Zacarias, A.G.V., Weber, C., Reimann, P., Mitschang, B., 2021. AssistML: a concept to recommend ML solutions for predictive use cases. 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, pp. 1–12.
Zhang, A., Sheng, L., Chen, Y., Li, H., Deng, Y., Wang, X., Chua, T.-S., 2023. On generative agents in recommendation. arXiv preprint arXiv:2310.10108.
Zhang, H., Zang, Z., Zhu, H., Uddin, M.I., Amin, M.A., 2022. Big data-assisted social media analytics for business model for business decision making system competitive analysis. Inf. Process. Manag. 59, 102762.