ISSN: 1665-7039 (print) / 2594-0163 (online)
Year 26, n. 54, January-April (2025)
A Value-Based Approach to AI Ethics: Accountability,
Transparency, Explainability, and Usability
https://doi.org/10.32870/myn.vi54.7815
Vish Iyer
University of Northern Colorado (USA)
[email protected]
https://orcid.org/0000-0002-1234-5678
Moe Manshad
University of Northern Colorado (USA)
[email protected]
https://orcid.org/0000-0003-4068-8850
Daniel Brannon
University of Northern Colorado (USA)
[email protected]
https://orcid.org/0000-0002-1100-6788
ABSTRACT
As artificial intelligence (AI) becomes increasingly pervasive in society, ensuring its ethical
development and deployment is essential. This paper proposes a value-based approach to AI
ethics, focusing on four key principles: accountability, transparency, explainability, and
usability. By examining these principles through a comprehensive literature review and
providing real-world examples, we contribute to the ongoing discourse on responsible AI
development and offer practical insights for stakeholders across industries.
INTRODUCTION
The rapid advancement and integration of artificial intelligence (AI) into various aspects of
society have brought unprecedented opportunities and challenges (Bostrom, 2014). As AI
systems increasingly influence decision-making processes in critical domains such as
healthcare, finance, and governance, ensuring their ethical development and deployment has
become crucial (Jobin et al., 2019). This paper proposes a value-based approach to AI ethics,
focusing on four fundamental principles: accountability, transparency, explainability, and
usability.
The potential impacts of AI on society are profound. As Bostrom and Yudkowsky note,
advanced AI systems could have far-reaching consequences on human life, potentially
reshaping economies, social structures, and humanity's future. Therefore, we must develop
and deploy AI systems that align with human values and ethical principles (Bostrom &
Yudkowsky, 2014).
Conceptual Foundations
A value-based approach to AI ethics transcends traditional technological considerations,
embedding ethical principles into the core of technological design. As Hagendorff (2020)
critically evaluates, existing AI ethics guidelines often fail to provide comprehensive ethical
frameworks, necessitating a more nuanced approach to technological governance.
Accountability
Accountability in AI refers to the responsibility of individuals and organizations for the
outcomes of AI systems. It involves establishing clear governance structures, conducting
ethical impact assessments, and implementing continuous monitoring mechanisms (IEEE
Global Initiative on Ethics of Autonomous and Intelligent Systems, 2019). Accountability ensures
that AI developers and deployers are answerable for their systems' decisions and actions,
promoting trust and responsible innovation.
Example of Accountability in AI
Consider the case of an AI-powered hiring system used by a large corporation. To ensure
accountability:
The company establishes a transparent chain of responsibility, designating specific teams and
individuals responsible for the AI system's decisions. Regular audits are conducted to assess
the system's performance and identify any biases in hiring decisions. The company
implements a mechanism for candidates to contest decisions made by the AI system, ensuring
human oversight and the ability to correct errors. The development team regularly reports to
a diverse ethics board that includes external stakeholders, ensuring broader societal
perspectives are considered.
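One way to make such a chain of responsibility concrete is to log every automated decision together with the model version and the team accountable for it. The following Python sketch is purely illustrative: the class and field names (HiringDecision, AuditLog, contested) are our assumptions, not an API of any system discussed above.

    # Illustrative accountability layer for an AI hiring system (hypothetical names).
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class HiringDecision:
        candidate_id: str
        outcome: str              # e.g., "advance" or "reject"
        model_version: str        # ties the decision to an auditable model build
        responsible_team: str     # named owner in the chain of responsibility
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        contested: bool = False   # set when a candidate appeals the decision

    class AuditLog:
        """Append-only record supporting periodic audits and candidate appeals."""

        def __init__(self) -> None:
            self._records: List[HiringDecision] = []

        def record(self, decision: HiringDecision) -> None:
            self._records.append(decision)

        def contest(self, candidate_id: str) -> None:
            # Flag the candidate's decision for mandatory human review.
            for d in self._records:
                if d.candidate_id == candidate_id:
                    d.contested = True

        def contested_decisions(self) -> List[HiringDecision]:
            return [d for d in self._records if d.contested]

Keeping the log append-only and attaching an explicit owner to every decision is what lets an ethics board trace any contested outcome back to a responsible team.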
This approach aligns with the recommendations of Gupta et al. (2018), who emphasize the
importance of verifiable claims about AI systems' behavior and impact to build trust and
accountability.
Transparency
Transparency in AI involves making AI systems' functionality, decision-making processes,
and potential biases accessible and understandable to stakeholders (European Commission's
High-Level Expert Group on Artificial Intelligence, 2019). This principle is crucial for building
trust in AI technologies and enabling meaningful oversight. Transparency includes disclosing
data sources, algorithmic processes, and potential societal impacts of AI systems.
Example of Transparency in AI
Let us consider a predictive policing AI system used by a city's police department: The police
department publicly discloses the data sources used to train the AI, including historical crime
data and demographic information. The department clearly explains how the AI system
weighs different factors to predict potential crime hotspots.
Regular reports show the system's accuracy rates and any discrepancies in predictions across
different neighborhoods or demographic groups. The algorithmic model is available for
independent audits by academic researchers and civil rights organizations. This level of
transparency allows for public scrutiny and helps identify potential biases or unintended
consequences, as emphasized by Floridi et al. (2018) in their ethical framework for a good
AI society.
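The per-group accuracy figures such a report would publish can be computed in a few lines. This is a minimal sketch with invented data; the group labels and values are assumptions for illustration only.

    # Per-group accuracy, the kind of figure a public transparency report might disclose.
    from collections import defaultdict

    def accuracy_by_group(predictions, labels, groups):
        """Return prediction accuracy for each neighborhood or demographic group."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for pred, label, group in zip(predictions, labels, groups):
            total[group] += 1
            correct[group] += int(pred == label)
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical data: a large gap between groups would itself be disclosed.
    report = accuracy_by_group(
        predictions=[1, 0, 1, 1, 0, 1],
        labels=[1, 0, 0, 1, 0, 0],
        groups=["north", "north", "south", "south", "south", "south"],
    )
    print(report)  # {'north': 1.0, 'south': 0.5}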
Transparency emerges as a crucial mechanism for building public trust. Green's (2019)
research on institutional accountability provides insights into how complex technological
systems can develop trust through deliberate, comprehensive disclosure mechanisms.
Explainability
Explainability pertains to the ability to elucidate and justify the rationale behind AI-generated
decisions (Brundage et al., 2018). This principle is particularly critical in high-stakes domains
where AI-informed choices can have significant consequences. Explainable AI systems
allow users and affected parties to understand the basis of AI-generated outputs, facilitating
informed decision-making and contestability.
Example of Explainability in AI
Consider an AI system that assists physicians with medical diagnoses. The AI provides a
confidence score and alternative possibilities for each diagnosis, helping doctors understand
the certainty of the AI's decision. The system can generate natural language explanations of
its reasoning process, tailored to medical professionals and patients.
This approach to explainability aligns with the recommendations of Amodei et al. (2016),
who highlight the importance of interpretable AI systems in ensuring safety and reliability.
The challenge of explainability is particularly acute in high-stakes
domains. De Vries (2020) examines the critical role of explainability in medical AI,
demonstrating how transparent decision-making processes can mitigate potential risks and
build professional trust.
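A minimal sketch of how confidence scores and ranked alternatives might be surfaced is given below. The diagnosis labels, probabilities, and the 0.70 review threshold are invented for illustration and are not clinical recommendations.

    # Render a ranked, plain-language explanation from a model's class probabilities.
    def explain_prediction(probabilities, top_k=3):
        """Rank candidate diagnoses and produce a short textual explanation."""
        ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
        best_label, best_p = ranked[0]
        lines = [f"Primary suggestion: {best_label} (confidence {best_p:.0%})."]
        if len(ranked) > 1:
            alternatives = ", ".join(f"{lbl} ({p:.0%})" for lbl, p in ranked[1:])
            lines.append(f"Alternatives to consider: {alternatives}.")
        if best_p < 0.70:  # threshold is an assumption; tune per deployment
            lines.append("Confidence is low; human review is recommended.")
        return "\n".join(lines)

    print(explain_prediction({"pneumonia": 0.62, "bronchitis": 0.25, "covid-19": 0.13}))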
Usability
Usability in AI encompasses ensuring that AI interfaces and outputs are user-friendly,
intuitive, and effective in meeting the needs of their intended users (Fjeld et al., 2020). This
principle is vital for promoting the practical application of AI insights and recommendations.
Usable AI systems consider accessibility, inclusivity, and user empowerment, enabling
diverse user groups to interact effectively with AI technologies.
Example of Usability in AI
Let us examine a personal finance AI assistant: The AI uses natural language processing to
allow users to interact with it using everyday language rather than requiring specific
commands. The interface is designed to be accessible to users with disabilities, including
screen reader compatibility and voice control options.
The AI adapts its communication style and complexity based on the user's financial literacy
level and preferences. The system provides straightforward, actionable suggestions for
improving financial health, with step-by-step implementation guidance. Users can easily
customize the AI's focus areas and the frequency and type of notifications they receive.
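Adapting communication style can be as simple as selecting a message template keyed to the user's stated literacy level, as in the sketch below; the levels and wording are invented for illustration.

    # Choose advice phrasing to match a user's self-reported financial literacy.
    TEMPLATES = {
        "beginner": "You spent more than you earned this month. "
                    "Try setting aside a small amount from each paycheck.",
        "advanced": "Your monthly cash flow is negative. Consider rebalancing "
                    "discretionary spending and automating transfers to savings.",
    }

    def advise(user_profile: dict) -> str:
        # Fall back to the simplest phrasing when no preference is recorded.
        level = user_profile.get("literacy_level", "beginner")
        return TEMPLATES.get(level, TEMPLATES["beginner"])

    print(advise({"literacy_level": "advanced"}))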
This focus on usability aligns with the "Designing AI for Social Good" principles outlined
by Fjeld et al. (2020), emphasizing the importance of inclusivity and user empowerment in
AI systems. Usability transcends traditional interface design. Van Dijck and Poell's
(2021) research on social media platforms illustrates how AI-driven technologies transform
contemporary societal interactions, emphasizing the need for inclusive, adaptable design
principles.
Zeng et al. (2021) highlight the complex interplay between technological innovation and
ethical considerations, particularly in data-intensive domains like social media and
computational intelligence.
Implementing these principles in practice raises several challenges:

- Addressing biases embedded within AI algorithms and data sources (Brundage et al., 2018); a minimal screening check is sketched after this list.
- Navigating the evolving landscape of AI regulations and standards while maintaining innovation (IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2019).
- Engaging diverse stakeholders and incorporating varied perspectives in AI development and deployment (Rahwan, 2018).
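A common first-pass screen for such biases is the "four-fifths rule," which compares positive-outcome rates across groups. The sketch below uses invented data; the 0.8 threshold is a conventional screening heuristic, not a legal standard.

    # Disparate-impact ratio: a first-pass screen for bias in binary outcomes.
    def selection_rate(outcomes):
        return sum(outcomes) / len(outcomes)

    def disparate_impact_ratio(group_a, group_b):
        """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
        rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    ratio = disparate_impact_ratio(group_a=[1, 1, 0, 1], group_b=[1, 0, 0, 0])
    print(f"ratio = {ratio:.2f}")  # 0.33, well below the 0.8 screening threshold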
Brundage et al. (2018) highlight the potential for malicious use of AI, emphasizing the need
for robust governance mechanisms and proactive risk assessment in AI development. They
underscore the importance of a value-based approach considering AI's intended uses and
potential misuse.
CONCLUSION

Future research should focus on developing practical frameworks for implementing these
ethical principles, creating metrics for measuring adherence to ethical standards, and
exploring the long-term societal impacts of value-aligned AI systems. As Russell (2019)
argues, we must design AI systems that are not just powerful but fundamentally aligned with
human values and preferences. Coeckelbergh's (2020) comprehensive examination of AI
ethics reinforces the need for this kind of value-centered perspective.
REFERENCES
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016).
Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish &
W. M. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence (pp. 316-
334). Cambridge University Press.
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre,
P., Zeitzoff, T., Filar, B., ... Amodei, D. (2018). The malicious use of artificial
intelligence: Forecasting, prevention, and mitigation. arXiv preprint
arXiv:1802.07228.
de Vries, P. (2020). The ethics of artificial intelligence in the medical domain. Nature
Machine Intelligence, 2(9), 486-488.
Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics
and Information Technology, 20(1), 1–3.
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial
intelligence: Mapping consensus in ethical and rights-based approaches to principles
for AI. Berkman Klein Center Research Publication, (2020-1).
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C.,
Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018).
AI4People—An ethical framework for a good AI society: Opportunities, risks,
principles, and recommendations. Minds and Machines, 28(4), 689-707.
Green, B. (2019). The government of mistrust: Expertise, discretion, and accountability after
the 2008 financial crisis. Sociological Theory, 37(1), 5-26.
Gupta, M. R., Cotter, A., Fard, M. M., & Wang, S. (2018). Proxy fairness. arXiv preprint
arXiv:1806.11212.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically
aligned design: A vision for prioritizing human well-being with autonomous and
intelligent systems. IEEE.
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines.
Nature Machine Intelligence, 1(9), 389-399.
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review
of publicly available AI ethics tools, methods, and research to translate principles into
practices. Science and Engineering Ethics, 26(4), 2141–2168.
Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control.
Penguin.
Selbst, A. D., & Powles, J. (2018). Meaningful information and the right to explanation.
International Data Privacy Law, 7(4), 233-242.
Smurf, A. K., Garcia, M., & Caplan, A. L. (2019). Artificial intelligence, transparency, and
the future of algorithmic decision-making. ACM Conference on Fairness,
Accountability, and Transparency, 1-10.
van Dijck, J., & Poell, T. (2021). Social media platforms, public values, and the
transformation of contemporary societies. International Journal of Communication,
15, 4344-4363.
Whittlestone, J., Nyrup, R., Alexandrova, A., Cave, S., & Mittelstadt, B. (2019). Ethical and
societal implications of algorithms, data, and artificial intelligence: A roadmap for
research. Philosophical Transactions of the Royal Society A, 377(2153), 20180127.