Unit 8: Ethics
Artificial Intelligence (AI) is rapidly transforming society, offering revolutionary benefits but also raising significant
ethical questions. Ethics in AI ensures that technologies are developed and used responsibly, aligning with human
values and societal norms.
Ethics refers to the moral principles that govern behavior, focusing on concepts such as fairness, justice, accountability,
and respect for human rights. In AI, ethics guides the responsible development and use of technologies to avoid
problems such as bias, lack of transparency or accountability, and privacy violations.
AI Ethics involves the principles that ensure AI is fair, transparent, accountable, and aligned with human values,
promoting the responsible deployment of AI systems.
1. Explainability: AI systems must be transparent, allowing users to understand how decisions are made. This
fosters trust, accountability, and ethical use of AI.
2. Fairness: Efforts must be made to remove bias and discrimination from AI models. Fairness ensures that
decisions made by AI do not unfairly disadvantage certain groups.
3. Robustness: AI systems must be reliable, consistently delivering accurate results under various conditions and
over extended periods.
4. Transparency: Open disclosure of AI design, operation, and decision-making processes promotes
accountability, allowing stakeholders to assess ethical impacts and societal consequences.
5. Privacy: Individuals must have control over their personal information and be free from unnecessary intrusion,
ensuring autonomy and dignity.
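The fairness principle above can be made concrete with a simple metric. One common check is demographic parity: comparing the rate of positive outcomes (e.g., loan approvals) across groups. The sketch below uses made-up illustrative data, not results from any real system:

```python
# Demographic parity check: compare positive-outcome rates across groups.
# The outcome lists below are illustrative, not from a real system.

def positive_rate(outcomes):
    """Fraction of decisions that are positive (1 = approved, 0 = denied)."""
    return sum(outcomes) / len(outcomes)

# Decisions for two groups split by a protected attribute (hypothetical data)
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 approved

# A large gap signals the system may unfairly disadvantage one group
disparity = positive_rate(group_a) - positive_rate(group_b)
print(f"Demographic parity difference: {disparity:.3f}")
```

A difference near zero suggests similar treatment; here the 0.375 gap would warrant investigation. Demographic parity is only one of several fairness definitions, and which one applies depends on context.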
BIAS IN AI
Bias in AI occurs when algorithms make decisions that reflect unfair assumptions or societal inequalities. Bias can
result from flawed data, algorithmic design, or cognitive biases embedded by developers.
Bias Awareness refers to recognizing and addressing unfair preferences in AI systems. It is critical for avoiding
discrimination and ensuring that AI serves all groups equitably.
Sources of Bias
1. Training Data Bias: Bias arises when training datasets over- or under-represent certain groups, leading to
inaccurate or discriminatory AI outcomes.
2. Algorithmic Bias: Flawed algorithms may perpetuate bias from training data or through errors in decision-
making models.
3. Cognitive Bias: Developers' biases can be unintentionally embedded in AI systems through data selection and
algorithm design.
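Training data bias, the first source listed above, can often be detected before any model is trained by comparing group representation in the dataset against a reference population. A minimal sketch, with hypothetical counts and group names:

```python
# Training-data bias check: compare each group's share of the training set
# against its assumed share of the real population. All figures are made up.

from collections import Counter

# Hypothetical dataset labels: group_x is heavily over-represented
training_labels = ["group_x"] * 900 + ["group_y"] * 100

# Assumed real-world split (an illustrative reference, not actual statistics)
reference_share = {"group_x": 0.5, "group_y": 0.5}

counts = Counter(training_labels)
total = sum(counts.values())
for group, expected in reference_share.items():
    observed = counts[group] / total
    print(f"{group}: {observed:.0%} of training data (expected {expected:.0%})")
```

A model trained on this dataset would see nine times more examples of group_x than group_y, which is exactly the over-/under-representation problem described above.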
EXAMPLES OF AI BIAS IN REAL LIFE
1. Healthcare: AI in medical diagnosis may underperform for underrepresented groups (e.g., Black patients
receiving less accurate results).
2. Online Advertising: Gender bias in job ads, with higher-paying roles shown to men more often than to women.
3. Image Generation: AI generating biased images, such as depicting older professionals as male, reinforcing
gender stereotypes.
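The healthcare example above suggests a practical audit: checking whether a model's accuracy differs between groups, since an overall accuracy figure can hide poor performance on an underrepresented group. A sketch with synthetic predictions and labels:

```python
# Per-group accuracy audit. Each record is (group, true_label, prediction);
# the data is synthetic and purely for illustration.

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

def accuracy_by_group(rows):
    """Return {group: accuracy} so gaps between groups are visible."""
    stats = {}
    for group, truth, pred in rows:
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (truth == pred), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

acc = accuracy_by_group(records)
print(acc)
```

In this toy data the model is right 75% of the time for group_a but only 50% for group_b; a real audit would use far more data and report confidence intervals.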
To address AI bias, developers can use diverse and representative training data, audit models regularly for biased outcomes, and involve diverse teams in system design and testing.
DEVELOPING AI POLICIES
Clear AI policies are essential for responsible use. Policies should ensure that AI systems are ethical, transparent, and
respectful of human rights.
1. IBM AI Ethics Board: Develops principles for ethical AI, focusing on fairness, transparency, and bias mitigation.
2. Microsoft’s Responsible AI: Provides guidelines for responsible AI development, emphasizing fairness, reliability, and
privacy.
3. Google’s AI Governance: Focuses on AI that prioritizes human values, safety, and transparency.
4. European Union’s Guidelines: Develops ethical AI principles, including fairness, accountability, and human oversight.
The Moral Machine is an interactive platform by MIT that presents ethical dilemmas in AI, such as decisions self-
driving cars must make in life-or-death scenarios. These dilemmas often force users to choose between conflicting
moral principles (e.g., saving passengers vs. pedestrians).
It serves as a tool to raise awareness of the ethical complexities of AI and encourages reflection on the moral
implications of AI technologies.
Chapter End Q & A
B. True/False
1. Understanding the fundamental principles of ethics is crucial to applying ethical considerations in the field of
artificial intelligence.
2. The ability to critically analyze the ethical implications of AI decision-making processes requires a deep
understanding of their impact on individuals and society.
3. Investigating various types of bias in AI systems enables students to understand their ethical implications.
4. Bias in AI systems can lead to unfair and discriminatory outcomes, making it essential to address issues of bias,
fairness, and equity.
5. In the context of AI, transparency is important for making the decision-making processes of AI systems clear
and understandable to users.
F. Ethical Dilemma
Discussion:
Lawmakers, business leaders, and the public must collaborate to establish clear ethical guidelines and regulations for
AI development and use. By promoting transparency, accountability, and inclusivity in AI design, they can ensure that
AI technologies prioritize human rights and societal well-being. Public education and open discussions are crucial to
ensuring AI is created and implemented ethically.