AI and Privacy Concerns in Data Security
(https://corrosion-management.com/)
Volume 33, Issue 01 – June 2023
SCOPUS
Abstract
This study explores the interaction of artificial intelligence (AI) and privacy, addressing both the theoretical and practical dimensions of this critical issue, and evaluates the privacy concerns raised by AI and their implications for data security. The theoretical framework examines privacy in the digital age, drawing on ethical theories to show that the emerging digital world brings substantial benefits but also heightens privacy concerns. The core of the paper analyzes the specific privacy risks associated with AI and examines regulatory frameworks such as the GDPR and HIPAA for the protection of sensitive data. The discussion of implications focuses on the broader impacts of AI on privacy and ethical responsibility and explores different ways of protecting data. Detailed case studies provide real-world context for AI security, illustrating practical challenges faced by organizations, including data breaches and the loss of sensitive information, and offering direction for the development of government privacy policy backed by strict privacy legislation. AI's data-security concerns are complicated, spanning both the collection and the securing of data, and addressing them is essential to ensuring the privacy of organizations and individuals. Regulatory frameworks such as the Privacy Act 2018, the GDPR, and the OECD privacy principles are employed for the protection of privacy and impose strict rules for protecting the privacy and data security of organizations and individuals.
Keywords: Artificial Intelligence (AI), Privacy Concerns, Data Protection, Ethical Implications,
Regulatory Challenges, Surveillance
1355-5243/© The Authors. This is an open access article under the CC BY license
(http://creativecommons.org/licenses/by/4.0/).
Corrosion Management ISSN: 1355-5243
1. Introduction
1.1 Background and Significance
Artificial intelligence (AI) is a subfield of computer science concerned with performing tasks generally done by humans, such as visual and auditory perception and learning, and AI techniques have become established in our daily lives. Privacy concerns about the security of data are increasing as technologies like generative AI become more common. Business leaders need to recognize the significant benefits of AI while remaining cautious about the privacy and ethical challenges that come with it [1]. The study conducted by Manikonda and colleagues indicated that AI has allowed individuals to rely heavily on automated systems to solve a range of problems, and examined people's attitudes toward privacy policies, finding that a significant share of people are concerned about them. Initially, participants reported no concern about privacy, but once they learned about the always-listening features of their devices, their concern for privacy and data security increased [2].
The fast growth of AI technologies has outpaced the establishment of strong privacy laws, resulting in inadequate protection for people's personal information. Additionally, AI can infer and extract sensitive information from harmless-looking data, increasing the risk of privacy breaches. AI functions depend largely on the voluntary and involuntary sharing of users' personal information [3].
Significance
This study evaluates the applications of AI, identifies the potential risks, and develops strategies to mitigate the risks that data-security privacy concerns pose. It offers policymakers a better understanding of regulation and of how to balance AI innovation with privacy concerns. It also educates individuals about AI privacy concerns, helping them make informed decisions about their data security, and provides recommendations for organizations to keep their data safe. By exploring the intersection of AI and privacy, this study can help create a safer and more transparent AI ecosystem, protecting individuals' rights while promoting responsible AI development.
By analyzing and evaluating the intersection of AI and privacy, this study contributes to the safe development of AI and data security and supports organizations in the creation of effective regulations. The objectives of the study are as follows:
1. To identify and analyze the privacy risks associated with AI applications for data security.
2. To evaluate the effectiveness of existing guidance and regulations by reviewing privacy law.
3. To explore privacy-preserving AI techniques for data security.
4. To develop recommendations for AI developers, policymakers, and individuals for the protection of personal data.
5. To contribute to the development of a privacy-protecting AI ecosystem for data security.
By accomplishing these objectives, the study aims to contribute to the development of a reliable
ecosystem of AI.
Research Questions
A. What privacy risks are associated with AI applications for data security, and how do they impact individuals' personal data?
B. How effective are current regulations and policies at mitigating privacy risks?
C. How can AI developers and policymakers work together to create a privacy-respecting AI ecosystem for data security?
D. What are the policy implications related to AI and data security?
1.4 Structure of the Paper
This study presents a theoretical framework for AI and privacy concerns in data security in the digital era, covering data collection and data processing and evaluating data breaches. The regulatory and legal framework surveys global data protection alongside national laws and standards, and evaluates relevant case studies for a better understanding of AI privacy concerns. The study then evaluates the impacts of these concerns on individuals and businesses. By presenting a model of AI and innovation with future recommendations, it helps to address the issues of AI and data security.
2. Theoretical Framework
2.1 Definition of Artificial Intelligence (AI)
Artificial intelligence (AI) is the practice of building computers with the capacity to think, learn, and make decisions on their own. AI systems perform tasks that previously required humans, such as speech recognition, forecasting, and complex problem-solving, by applying algorithms to massive amounts of data. There are, however, serious privacy and security issues with AI. It makes it possible to compile and analyze large-scale databases that often include sensitive personal data, which raises the possibility of misuse or unauthorized access.
Because AI systems can extract complex personal insights from seemingly harmless data, they increase the risk of privacy breaches. In addition, using AI to monitor and surveil people may violate their privacy rights. To address these issues and strike a balance between technological advancement and the preservation of personal information, strong data security measures and transparent AI practices are essential [5].
2.2 Privacy in a Digital Age
In the digital age, artificial intelligence (AI) has a significant impact on privacy because it enables the collection, analysis, and interpretation of massive amounts of personal data. AI systems may uncover trends and sensitive information through data mining and machine learning, frequently going beyond what people originally provided. Misuse of, or unauthorized access to, information may compromise personal data and raise privacy concerns. Moreover, AI-driven tracking and surveillance systems may encroach on human freedoms and security. Addressing these problems, and guaranteeing that privacy rights are respected while AI's advantages are realized, requires strict data security procedures, transparency in AI algorithms, and strong regulatory frameworks [6].
Figure 2: Legal and security considerations for AI concerns and data security [8]
Virtue ethics advocates for technologies that reflect qualities such as respect and honesty, ensuring privacy through ethical design; it focuses on the goals and character of the individuals creating and using AI and protecting data. Lastly, social contract theory draws attention to the unspoken agreements that exist between people and society. It implies that users and developers of AI must uphold privacy standards for data security as a component of a larger social contract that promotes justice and trust in the digital sphere [9].
Overall, AI privacy concerns are multifaceted, spanning data collection, data security, and user autonomy. Addressing these concerns is important to ensure that privacy is a priority in AI development and that AI systems remain accountable for data transparency and data security.
4. Regulatory and Legal Frameworks
To make the development of AI safe and to protect data privacy, a regulatory landscape exists comprising global data protection regulations, national laws and standards, sector-specific guidelines, and ethical codes, illustrated here by case studies of organizations that have faced legal challenges.
IBM Watson Health faced a data breach in 2019 that exposed sensitive patient data, revealing vulnerabilities in the handling and securing of health data. Many privacy concerns were raised about the inadequacy of the security measures protecting the data. Healthcare privacy regulations such as HIPAA were imposed on healthcare providers to secure data in AI-driven healthcare systems [16].
To secure individuals' privacy amid rapid technological advancement, stringent data protection laws and ethical AI procedures are essential. Intrusions of this kind have the potential to undermine trust and result in privacy breaches [18].
5.2 Implications for Businesses
When it comes to AI, businesses need to strike a balance between innovation and privacy. Despite its capacity to drive innovation and deepen consumer understanding, AI presents risks to data security and regulatory compliance. To defend individuals' right to privacy, organizations must ensure that their data collection and management practices comply with regulations such as the CCPA and GDPR. A company's reputation could be damaged, customer confidence eroded, and legal issues could arise if these obligations are overlooked. To reduce risks and preserve corporate integrity, transparent AI procedures and efficient data management are essential [19].
Society must govern how artificial intelligence affects privacy and data security. Governments, corporations, and individuals all need to ensure that AI technologies are created and applied in ways that uphold moral standards and protect the right to privacy. This entails putting robust data security protocols in place, enforcing accountability, and maintaining openness.
To preserve public confidence and encourage the appropriate application of AI to improve social benefits without jeopardizing people's security, ethical responsibility entails proactively addressing possible privacy breaches and protecting personal information [21].
Innovative solutions are being developed to tackle AI and privacy issues, focusing on safeguarding data and maintaining transparency. Federated learning allows for distributed data analysis while preserving privacy, complemented by differential privacy and homomorphic encryption. Explainable AI offers insight into AI decision-making procedures, while tools such as Privacy-Enhancing Technologies (PETs) and data anonymization methods protect personal information. Rules such as the GDPR and CCPA mandate the responsible development of AI. These solutions strike a balance between AI's capabilities and privacy requirements, enabling reliable and secure AI applications. By incorporating privacy features, AI can fuel innovation while upholding individual rights and preserving public confidence [22].
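The federated-learning idea mentioned above can be shown with a minimal federated-averaging sketch in Python. The one-parameter model y = w * x, the learning rate, and the tiny client datasets are illustrative assumptions, not a production protocol:

```python
# Minimal federated-averaging sketch: each client trains a local model on its
# own data and shares only model weights -- never raw records -- with a server.

def local_update(w, data, lr=0.01, epochs=20):
    """One client's local gradient descent on (x, y) pairs for y = w * x.
    The raw data never leaves this function."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(client_weights):
    """Server step: aggregate only the submitted weights."""
    return sum(client_weights) / len(client_weights)

# Two clients hold disjoint private datasets, both drawn from y = 3x.
client_a = [(1.0, 3.0), (2.0, 6.0)]
client_b = [(3.0, 9.0), (4.0, 12.0)]

global_w = 0.0
for _ in range(5):  # communication rounds
    updates = [local_update(global_w, d) for d in (client_a, client_b)]
    global_w = federated_average(updates)

print(round(global_w, 2))  # prints 3.0: the model converges without pooling data
```

The design point is that the server never observes the training records, only weight updates; real deployments add secure aggregation and noise on top of this basic loop.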
Innovation mapping spans big data, cloud computing, additive manufacturing, cybersecurity, and related technologies. Among emerging solutions and innovations, the discussion below covers privacy-preserving AI technologies, enhanced transparency in AI systems, and improved user control and consent mechanisms.
Emerging AI tools are being developed to protect privacy and address data privacy concerns. Homomorphic encryption permits calculations to be performed on encrypted data, while secure multi-party computation enables cooperative learning without revealing the underlying data. Differential privacy introduces random noise into data to guarantee privacy. Federated learning and decentralized AI keep data local to minimize risks. Additional advancements include zero-knowledge proofs, which authenticate
information without disclosing it, and synthetic data, which produces artificial records on which to train AI algorithms. These solutions allow for AI analysis while safeguarding confidential data, promoting trust and ethical AI advancement. They maintain a balance between innovation and discretion, helping to ensure that AI has a positive influence on society and data security [24].
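The differential-privacy mechanism described above can be sketched in a few lines of Python using the Laplace mechanism. The dataset, the counting query, and the choice of epsilon are illustrative assumptions:

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Differentially private count. A counting query has sensitivity 1, so
    adding Laplace noise with scale 1/epsilon yields epsilon-DP. The Laplace
    sample is drawn via the inverse-CDF method."""
    true_count = sum(1 for v in values if predicate(v))
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(0)  # fixed seed so the sketch is reproducible
ages = [23, 35, 45, 52, 29, 61, 38, 44, 27, 50]
# How many records have age > 40? The true answer is 5; only a noisy
# answer is released, so no single record can be confidently inferred.
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)
print(round(noisy, 1))
```

Smaller epsilon means more noise and stronger privacy; repeated queries consume privacy budget, which real systems must track.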
The transparency of AI systems is improved by new solutions and innovations applying Explainable AI (XAI) methods such as model interpretability, feature attribution, and model-agnostic explanations. These methods offer insight into how AI makes choices, allowing biases to be detected and information use to be examined. Additionally, transparency-by-design models and open-source AI implementations encourage accountability and reproducibility, while attention mechanisms and visualization aid in understanding AI models. These advances boost confidence in AI systems by offering clear explanations for AI-generated results, allowing biases and faults to be detected, and encouraging ethical AI development for data security [25].
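One of the simplest feature-attribution techniques mentioned above, perturbation-based attribution, can be sketched as follows. The linear "model", the feature names, and the baseline value are stand-ins for a real trained system:

```python
def model_score(features):
    """Stand-in black box: a fixed linear scorer over three features.
    In practice this would be a trained model behind an API."""
    weights = {"income": 0.5, "age": 0.1, "zipcode": 0.0}
    return sum(weights[k] * v for k, v in features.items())

def feature_attribution(features, baseline=0.0):
    """Perturbation-based attribution: replace one feature at a time with a
    baseline value and record how much the model's score changes."""
    full = model_score(features)
    attributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline})
        attributions[name] = full - model_score(perturbed)
    return attributions

x = {"income": 4.0, "age": 30.0, "zipcode": 7.0}
attr = feature_attribution(x)
print(attr)  # {'income': 2.0, 'age': 3.0, 'zipcode': 0.0}
```

Here the zero attribution for `zipcode` would reveal that the model ignores it, while nonzero attributions expose which personal attributes actually drive a decision, which is the transparency property XAI aims for.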
Emerging solutions and innovations also improve user control and consent mechanisms around AI and data-security privacy concerns, through technologies such as personal data management services, consent management platforms, and user-centric AI. Additionally, standards such as the GDPR and CCPA mandate user consent and control, ensuring responsible AI development. These solutions prioritize user independence, trust, and agency, aligning AI with individual values and privacy expectations. Emerging solutions and innovations in AI governance and privacy include collaborative approaches such as multi-stakeholder governance, public-private partnerships, and inclusive AI design. These approaches bring together governments, businesses, civil society, and specialists to develop and implement AI regulations, standards, and best practices. Moreover, innovations such as AI ethics frameworks, accountability mechanisms, and transparency strategies facilitate partnerships and ensure responsible AI progress, while initiatives such as AI for Social Good and human-centered AI encourage collaborative efforts to address AI's societal impacts. These collaborative approaches foster trust, ensure diverse perspectives, and drive responsible AI innovation that prioritizes privacy and human well-being [26].
At the intersection of AI and privacy lie critical business concerns encompassing data privacy and security issues, bias and discrimination, and lack of user consent and control. Understanding these problems is vital for the safe and responsible application of AI in a business for data security.
Simulation Model
A simulation model for AI and privacy concerns in data security provides a valuable approach to dealing with privacy risks. An AI privacy model involves creating a virtual environment to test how AI systems respond to various data security threats and privacy challenges. By using simulations, organizations can evaluate the effectiveness of different security measures and assess the
vulnerabilities, so as to better protect sensitive data. This process ensures that the AI model prioritizes privacy while upholding performance and fostering trust in AI applications.
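At its simplest, such a simulation can be a Monte Carlo experiment. The attack counts, detection probabilities, and trial count below are illustrative assumptions, not measurements from any real system:

```python
import random

def simulate_breaches(n_attacks, p_detect, trials=10_000, seed=42):
    """Monte Carlo estimate of how many of n_attacks per period slip past a
    detector that catches each attempt independently with probability p_detect."""
    rng = random.Random(seed)  # seeded RNG makes the experiment repeatable
    undetected = 0
    for _ in range(trials):
        undetected += sum(1 for _ in range(n_attacks) if rng.random() >= p_detect)
    return undetected / trials

# Compare 90% vs 99% detection rates over 20 attack attempts per period.
r90 = simulate_breaches(20, 0.90)
r99 = simulate_breaches(20, 0.99)
print(round(r90, 2), round(r99, 2))  # near the analytic means 20*0.10 = 2.0 and 20*0.01 = 0.2
```

Even this toy model makes the trade-off concrete: an order-of-magnitude improvement in detection probability cuts expected undetected breaches by the same factor, which is the kind of question a privacy simulation is built to answer before deployment.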
A confusion matrix provides information about the accuracy and reliability of the model's outputs and helps determine the classification performance achieved across the various classes in the model. It is a valuable tool for evaluating simulations and models, comparing predicted and actual values to assess accuracy. High accuracy in the AI model is indicated by a confusion matrix dominated by correct predictions, reflecting the model's effectiveness. False positives (FP) lead to unnecessary alerts and investigations, causing alarm fatigue among the users of the AI. False negatives (FN) represent missed instances of sensitive data and data breaches; a high number of FN can be dangerous. The insight gained from the confusion matrix informs ongoing improvement of the AI model for data security.
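The quantities above can be computed directly. The labels below are an invented toy example for a binary "sensitive data detected" task, not real evaluation data:

```python
def confusion_counts(y_true, y_pred):
    """Tally TP/FP/FN/TN for a binary classification task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# 1 = record actually contains sensitive data / model flags it as sensitive.
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]

tp, fp, fn, tn = confusion_counts(y_true, y_pred)
accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)  # share of alerts that were real (low FP -> less alarm fatigue)
recall    = tp / (tp + fn)  # share of sensitive records caught (FN are the dangerous misses)
print(tp, fp, fn, tn, accuracy, precision, recall)  # 4 1 1 4 0.8 0.8 0.8
```

For a data-security classifier, recall usually matters most, since each false negative is an undetected exposure of sensitive data.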
The ROC curve plots the true positive rate against the false positive rate and is well suited to comparing results in the simulation model. In the outcomes of the AI-and-privacy simulation for data security, the ROC curve indicates how well the model separates sensitive from non-sensitive cases, and the simulation model as a whole assesses privacy performance, accuracy, and robustness.
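The points of an ROC curve can be generated by sweeping a decision threshold over the model's scores. The scores and labels below are illustrative, standing in for the output of a hypothetical "is this record sensitive?" classifier:

```python
def roc_points(scores, labels):
    """Sweep a decision threshold over the scores (highest first) and record
    the (false positive rate, true positive rate) point at each step."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for threshold in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
        points.append((fp / neg, tp / pos))
    return points

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
pts = roc_points(scores, labels)
print(pts)
```

A curve hugging the top-left corner (high TPR at low FPR) indicates a model that flags sensitive data without flooding analysts with false alarms; the diagonal corresponds to random guessing.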
This approach allows for the proactive detection and reduction of AI privacy issues. Simulated AI-and-privacy scenarios involve data breaches, inference attacks, data poisoning, and the assessment of privacy-preserving methods. Scenarios vary in data quality, AI complexity, adversary capabilities, privacy mechanisms, and regulatory environments, and they evaluate AI's capacity to safeguard sensitive data, its susceptibility to cyber-attacks, and the efficiency of privacy safeguards. Scenario expectations account for factors such as data accuracy, AI model complexity, and adversary motivations. These simulations help estimate AI systems' privacy performance, identify vulnerabilities, and inform improvements to ensure responsible AI development and deployment. They enable privacy concerns to be addressed proactively and promote trust in AI applications. Ensuring the validity of an AI privacy simulation is crucial for reliable results.
The Facebook-Cambridge Analytica scandal is a famous instance of AI and privacy issues colliding. Cambridge Analytica, a political consulting firm, applied AI algorithms to gather personal data from numerous Facebook users without their consent. The incident led to increased scrutiny of AI and data-handling practices, highlighting the importance of prioritizing user privacy and consent [28]. In 2018 it emerged that Cambridge Analytica had used AI technology to generate targeted political advertisements by collecting Facebook user data without authorization. The controversy exposed Facebook's weak data-sharing policies and inadequate user protections: Cambridge Analytica's AI system analyzed user behavior, preferences, and relationships, imperiling the privacy of millions. Facebook faced harsh criticism, regulatory scrutiny, and a $5 billion fine, and the episode led to an increased focus on AI ethics within the domain of social media.
In 2019, the University of Chicago Medical Center partnered with Google to create an AI-powered predictive analytics tool without obtaining patient consent or anonymizing the data. The tool accessed patient records containing sensitive data without proper safeguards. This prompted a federal lawsuit, underlining the importance of centering patient privacy in AI-based healthcare innovation [29]. Research on AI and privacy in healthcare, finance, and social media highlights the importance of prioritizing data security and user consent. In addition, fostering a culture of ethics and accountability in AI development, along with constant monitoring and improvement, can help prevent data breaches and misuse, thereby preserving trust in AI applications across different industries.
In 2017, Equifax, a leading credit reporting company, suffered a major cyber-attack that exposed the sensitive financial information of 147 million people and violated data security. Hackers exploited a vulnerability in Apache Struts, open-source software used by Equifax's AI-powered credit scoring system. This case study highlights the need for rigorous data protection and ethical AI development in the finance sector to protect private consumer data [30].
9. Future Directions
As AI technology develops, trends in AI and privacy become more important. Key developments include the growing use of AI for data analysis and personalized services, which raises concerns about how personal data is collected and used, and the integration of AI with blockchain, which enables secure and verified data transactions and addresses data-security privacy issues.
Reforms to AI policy are expected to address the emerging challenges in data-security privacy and data protection. Governments and regulatory bodies can strengthen data protection laws, enforce compliance with privacy-policy standards, and introduce more robust frameworks for governing the ethical use of AI. Such regulations and policies increase transparency requirements, consent protocols, and the strength of data security measures.
AI itself is playing an important role in shaping future privacy norms and influencing the protection and management of data. AI can help create more dynamic and personalized privacy settings, giving users control over their data, and AI-driven tools can enforce privacy regulations and detect potential breaches and misuse of data.
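At its crudest, a breach-detection tool of this kind can be sketched as a statistical outlier check on data-access volumes. The access counts and the z-score threshold below are illustrative assumptions:

```python
def flag_anomalies(counts, threshold=2.5):
    """Flag periods whose access volume deviates more than `threshold`
    standard deviations from the mean -- a crude breach/misuse signal."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    std = var ** 0.5
    # Guard against std == 0 (perfectly uniform traffic has no outliers).
    return [i for i, c in enumerate(counts) if std and abs(c - mean) / std > threshold]

# Hourly record-access counts; hour 5 shows a suspicious bulk export.
access_counts = [102, 98, 110, 95, 105, 990, 101, 99]
print(flag_anomalies(access_counts))  # [5]
```

Production systems would use far richer features (who accessed what, from where, at what time) and learned baselines, but the principle is the same: deviations from normal access patterns trigger a privacy review before a breach grows.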
The future of AI presents both challenges and opportunities. The challenges include managing the balance between utility and privacy, ensuring strong security measures, and dealing with the ethical implications of AI-driven tools for decision-making and data security. By embracing the opportunities and confronting the challenges, AI can benefit society, protecting individuals' privacy rights and enhancing data security.
10. Conclusion
It is concluded that artificial intelligence is used in every field, and with its development, concern about data security has increased. This study evaluates the existing regulations for the protection of data and proposes more robust strategies for overcoming privacy issues. The theoretical framework of the study examines artificial intelligence and privacy in the digital era; the ethical theories relevant to AI and privacy are described and applied to AI development and the protection of privacy standards. AI concerns are multifaceted, spanning both the collection and the securing of data, and the study concludes that addressing them is important to ensuring the privacy of organizations and people. Regulatory frameworks such as the Privacy Act 2018, the GDPR, and the OECD principles are employed for the protection of privacy. Case studies of legal challenges show how regulations are applied to data breaches and data violations. Moreover, the study evaluates how AI privacy regulations affect individuals, organizations, and data security, and surveys the emerging AI tools being developed for the protection of privacy and data security.
References
[1] L.-E. C. Ferm, P. Thaichon and S. Quach, Solutions to artificial intelligence (AI) and privacy, London:
Routledge, 2022.
[2] L. Manikonda, A. Deotale and S. Kambhampati, "What's up with Privacy?: User Preferences and
Privacy Concerns in Intelligent Personal Assistants," In Proceedings of the 2018 AAAI/ACM
Conference on AI, Ethics, and Society, vol. 1, no. 1, pp. 229-235, 2018.
[3] Y. Canbay, A. Utku and P. Canbay, "Privacy Concerns and Measures in Metaverse: A Review,"
International conference on information security and cryptography (ISCTURKEY), vol. 1, no. 1, pp.
80-85, 2022.
[5] K. Manheim and L. Kaplan, "Artificial Intelligence: Risks to Privacy and Democracy," Yale JL & Tech, vol. 21, no. 1, p. 106, 2019.
[6] E. Tom, M. Blazes, L. R. Pasquale, M. F. Chiang, A. Y. Lee et al., "Protecting data privacy in the age of AI-enabled ophthalmology," Translational Vision Science & Technology, vol. 9, no. 2, p. 36, 2020.
[7] D. Elliott and E. Soifer, "AI technologies, privacy, and security," Frontiers in Artificial Intelligence, vol. 5, no. 1, p. 826737, 2022.
[8] N. Naik, B. Z. Hameed, D. K. Shetty, D. Swain and V. Patil, "Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility?," Frontiers in Surgery, vol. 1, no. 1, p. 8623222, 2022.
[9] Y. Zhang, M. Wu, G. Y. Tian, G. Zhang and J. Lu, "Ethics and privacy of artificial intelligence: Understandings from bibliometrics," Knowledge-Based Systems, vol. 222, no. 1, p. 106994, 2021.
[10] J. P. Choi, D.-S. Jeon and B.-C. Kim, "Privacy and personal data collection with information," Journal
of Public Economics, vol. 173, no. 1, pp. 113-124, 2019.
[11] P. Lacroix, "Big Data Privacy and Ethical Challenges," Big Data, Big Challenges: A Healthcare
Perspective, vol. 1, no. 1, pp. 101-111, 2019.
[12] S. Dilmaghani, M. R. Brust, G. Danoy and N. P. J. &. B. P. Cassagnes, "Privacy and Security of Big Data
in AI Systems: A Research and Standards Perspective," IEEE international conference on big data (big
data), vol. 9, no. 1, p. 6, 2019.
[13] M. L. Jones, E. Kaufman and E. Edenberg, "AI and the Ethics of Automating Consent," IEEE Security & Privacy, vol. 16, no. 3, pp. 64-72, 2018.
[14] A. Jobin, M. Ienca and E. Vayena, "The global landscape of AI ethics guidelines," nature machine
intelligence, vol. 1, no. 9, pp. 389-399, 2019.
[15] A. Hern, "Cambridge Analytica: how did it turn clicks into votes?," The Guardian, 6 May 2018. [Online]. Available:
https://www.theguardian.com/news/2018/may/06/cambridge-analytica-how-turn-clicks-into-votes-christopher-wylie.
[16] IBM Newsroom, "IBM Study Shows Data Breach Costs on the Rise; Financial Impact Felt for Years," 23 July 2019. [Online]. Available: https://newsroom.ibm.com/2019-07-23-IBM-Study-Shows-Data-Breach-Costs-on-the-Rise-Financial-Impact-Felt-for-Years.
[18] Y. Cheng and H. Jiang, "How do AI-driven chatbots impact user experience? Examining gratifications, perceived privacy risk, satisfaction, loyalty, and continued use," Journal of Broadcasting & Electronic Media, vol. 64, no. 4, pp. 592-614, 2020.
[19] S. Du and C. Xie, "Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities," Journal of Business Research, vol. 129, no. 1, pp. 961-974, 2021.
[20] A. Taeihagh, "Governance of artificial intelligence," Policy and Society, vol. 40, no. 2, pp. 137-157, 2021.
[21] L. Cheng, K. R. Varshney and H. Liu, "Socially responsible AI algorithms: Issues, purposes, and challenges," Journal of Artificial Intelligence Research, vol. 71, no. 1, pp. 1137-1181, 2021.
[22] S. S. Gill, M. Xu, C. Ottaviani et al., "AI for next generation computing: Emerging trends and future directions," Internet of Things, vol. 19, no. 1, p. 100514, 2022.
[23] Start Us, "Manufacturing Innovation Map Reveals Emerging Technologies & Startups," 11 July 2022.
[Online]. Available: https://www.startus-insights.com/innovators-guide/manufacturing-innovation-
map-reveals-emerging-technologies-startups/.
[25] U. Ehsan, Q. V. Liao, M. Muller, M. O. Riedl and J. D. Weisz, "Expanding explainability: Towards social transparency in AI systems," in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-19, 2021.
[26] M. Benyahya, S. Kechagia and A. Collen, "The Interface of Privacy and Data Security in Automated
City Shuttles: The GDPR Analysis," Applied Science, vol. 12, no. 9, p. 4413, 2022.
[27] P. Sarajcev, A. Kunac and G. Petrović, "Artificial Intelligence Techniques for Power System Transient
Stability Assessment," Energies, vol. 15, no. 2, p. 507, 2022.
[28] N. Confessore, "Cambridge Analytica and Facebook: The Scandal and the Fallout So Far," 4 April
2018. [Online]. Available: https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-
scandal-fallout.html.
[29] M. Wood, "UChicago Medicine collaborates with Google to use machine learning for better health
care," 17 May 2017. [Online]. Available: https://www.uchicagomedicine.org/forefront/research-
and-discoveries-articles/2017/may/uchicago-medicine-collaborates-with-google-to-use-machine-
learning-for-better-health-care.