Week 9 Session 2 Lesson Plan

The lesson plan for Week 9, Session 18 of the 'Prompt Engineering Specialization' course focuses on mitigating bias through prompt design, aimed at intermediate learners. Students will learn techniques to identify and reduce bias in AI outputs, engage in hands-on exercises, and participate in discussions to enhance their understanding of ethical AI use. The session includes lectures, demonstrations, and collaborative exercises, culminating in a homework assignment to apply learned concepts to their own work.

Uploaded by

McKay Thein

Below is a detailed lesson plan for **Week 9, Session 18: Mitigating Bias Through Prompt Design**, part

of the "Prompt Engineering Specialization" course. This session falls under **Part 2: Intermediate
Techniques and Applications**, focusing on ethics and bias in AI prompting. The plan is designed for a
90-minute session mixing lecture, discussion, and hands-on exercises tailored to intermediate learners.

---

### Lesson Plan: Week 9, Session 18 - Mitigating Bias Through Prompt Design

**Date**: Assumed to align with course timeline (e.g., mid-April 2025, based on a February 25, 2025
start date)

**Duration**: 90 minutes

**Level**: Intermediate

**Prerequisites**: Completion of prior sessions, including Session 17 (Understanding AI Bias and Fairness)

**Objective**: Equip students with practical techniques to identify and reduce bias in AI-generated
outputs through thoughtful prompt design, fostering ethical AI use.

---

### Session Goals

By the end of this session, students will be able to:

1. Recognize how prompt phrasing influences biased AI outputs.

2. Apply specific prompt engineering techniques to mitigate bias (e.g., neutral language, explicit
instructions).

3. Evaluate and refine prompts to improve fairness and inclusivity.

---

### Materials Needed

- Access to an AI platform (e.g., ChatGPT, Grok, or a sandbox environment provided by the course).

- Slides or visual aids (examples of biased vs. mitigated prompts).


- Handout: "Prompt Bias Mitigation Checklist" (to be distributed digitally or in print).

- Sample datasets or scenarios (e.g., job descriptions, customer service responses).

- Student laptops or devices for exercises.

---

### Lesson Plan Breakdown

#### 0:00–0:05 | Welcome and Recap (5 minutes)

- **Objective**: Set the stage and connect to prior learning.

- **Activities**:

- Quick greeting and overview of Session 18’s focus: moving from understanding bias (Session 17) to
actionable mitigation.

- Recap key points from Session 17:

- AI models reflect biases in training data.

- Prompts can amplify or reduce these biases.

- Pose a question to engage: “Have you noticed any biased outputs in your own prompt experiments so
far?”

- **Transition**: “Today, we’ll learn how to take control of that through prompt design.”

#### 0:05–0:25 | Lecture: Techniques to Reduce Biased Outputs (20 minutes)

- **Objective**: Introduce actionable strategies for bias mitigation.

- **Content**:

1. **Why Prompts Matter in Bias** (5 min)

- Brief explanation: Model biases (e.g., gender, race) can be worsened or countered by prompt
phrasing.

- Example: “Write a description of a typical software engineer” → Stereotypical output vs. “Write a
description of a software engineer, avoiding stereotypes.”

2. **Key Techniques** (15 min)

- **Neutral Language**: Avoid loaded terms (e.g., “aggressive” vs. “assertive”).


- **Explicit Instructions**: Tell the AI to be fair or inclusive (e.g., “Provide a balanced perspective”).

- **Counterbalancing**: Ask for multiple perspectives (e.g., “List pros and cons from diverse
viewpoints”).

- **Constraints**: Limit outputs to factual or neutral content (e.g., “Avoid assumptions about gender
or culture”).

- **Iterative Refinement**: Test outputs and adjust prompts based on results.

- **Visuals**: Show before-and-after prompt examples with AI outputs (e.g., biased job ad vs. neutral
job ad).

- **Engagement**: Pause at 0:20 to ask, “Which of these techniques do you think would be hardest to
implement consistently? Why?”
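The techniques above can be sketched as a small prompt-building helper. This is an illustrative sketch, not part of any course tooling: the technique names and clause wordings are assumptions chosen to mirror the lecture's examples.

```python
# Sketch: composing the lecture's mitigation techniques into a prompt builder.
# Technique names and clause wordings are illustrative assumptions.

MITIGATIONS = {
    "explicit_instruction": "Provide a balanced, inclusive perspective.",
    "counterbalance": "Include multiple viewpoints where relevant.",
    "constraint": "Avoid assumptions about gender, race, or culture.",
}

def mitigate(prompt: str, techniques: list[str]) -> str:
    """Append the selected mitigation clauses to a base prompt."""
    clauses = [MITIGATIONS[t] for t in techniques]
    return " ".join([prompt.rstrip(".") + "."] + clauses)

revised = mitigate("Describe a typical software engineer",
                   ["explicit_instruction", "constraint"])
print(revised)
```

Students could extend the dictionary with their own clauses during the exercise, then compare which combinations shift the model's output most.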

#### 0:25–0:35 | Demo: Bias Mitigation in Action (10 minutes)

- **Objective**: Model the process of identifying and fixing bias.

- **Activity**:

- Instructor runs a live demo using an AI platform (e.g., Grok).

- Starting Prompt: “Write a description of a typical CEO.”

- Likely Output: Male-centric, authoritative stereotype (e.g., “A confident man in a suit…”).

- Revised Prompt: “Write a description of a CEO, ensuring diversity in gender and leadership style,
avoiding stereotypes.”

- Expected Output: More balanced (e.g., “A leader of any gender, with a range of styles from
collaborative to decisive…”).

- Explain changes: Added explicit diversity instruction and removed implicit assumptions.

- **Discussion**: “What differences did you notice? How could we refine this further?”
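The demo's before/after comparison can be captured in a reusable harness. The `generate` callable is a placeholder assumption for whatever model call your platform exposes; a stub stands in here so the sketch runs offline.

```python
# Sketch of the demo's before/after comparison as a reusable harness.
# `generate` is a placeholder for the course platform's model call
# (an assumption); a stub stands in so the flow is runnable offline.

def compare(generate, original: str, revised: str) -> dict:
    """Run both prompts through the same model and pair the outputs."""
    return {
        "original": {"prompt": original, "output": generate(original)},
        "revised": {"prompt": revised, "output": generate(revised)},
    }

stub = lambda p: f"[model output for: {p}]"
result = compare(
    stub,
    "Write a description of a typical CEO.",
    "Write a description of a CEO, ensuring diversity in gender and "
    "leadership style, avoiding stereotypes.",
)
print(result["revised"]["output"])
```

Keeping both prompts and outputs in one structure makes it easy to project the side-by-side comparison during the demo.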

#### 0:35–1:05 | Exercise: Rewrite Prompts to Improve Fairness (30 minutes)


- **Objective**: Apply techniques hands-on to build practical skills.

- **Setup**:

- Distribute handout: “Prompt Bias Mitigation Checklist” (e.g., “Is the language neutral? Are
assumptions avoided?”).

- Provide 3 sample prompts with potential bias issues:

1. “Describe a typical nurse.” (Risk: Gender bias)


2. “Write a customer service response to an angry client.” (Risk: Tone bias or cultural assumptions)

3. “Generate a list of traits for a successful entrepreneur.” (Risk: Socioeconomic or cultural bias)

- **Instructions**:

- Work in pairs or solo (student choice).

- Step 1 (10 min): Test each prompt on an AI platform, note biased outputs.

- Step 2 (15 min): Rewrite each prompt using at least two mitigation techniques from the lecture.

- Step 3 (5 min): Test revised prompts and compare results.

- **Support**: Instructor circulates to assist, answer questions, and highlight creative solutions.
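A rough, heuristic version of the checklist handout could look like the sketch below. The word lists are small illustrative assumptions, not a real bias lexicon; treat it as a starting point for the pair exercise, not an actual bias detector.

```python
# Illustrative heuristic version of the "Prompt Bias Mitigation Checklist"
# handout: flags loaded terms and checks for mitigation cues.
# The word lists are small demonstration assumptions, not a bias lexicon.

LOADED_TERMS = {"typical", "aggressive", "normal"}
MITIGATION_CUES = {"avoiding stereotypes", "diverse", "inclusive", "balanced"}

def checklist(prompt: str) -> dict:
    """Return which loaded terms appear and whether any mitigation cue is present."""
    lower = prompt.lower()
    return {
        "loaded_terms": sorted(t for t in LOADED_TERMS if t in lower),
        "has_mitigation_cue": any(c in lower for c in MITIGATION_CUES),
    }

print(checklist("Describe a typical nurse."))
```

Running the three sample prompts through this helper before and after rewriting gives students a quick, if crude, sanity check on Step 2.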

#### 1:05–1:25 | Group Discussion and Reflection (20 minutes)

- **Objective**: Share insights and solidify learning.

- **Activities**:

- Volunteers (2-3 pairs) share their original vs. revised prompts and outputs (10 min).

- Example Qs: “What bias did you spot? How did your changes affect the output?”

- Class discussion (10 min):

- “Which technique worked best for you?”

- “What challenges did you face in balancing fairness with specificity?”

- “How might these skills apply to your midterm or capstone projects?”

- **Instructor Input**: Highlight standout examples, connect to real-world implications (e.g., fair AI in
hiring).

#### 1:25–1:30 | Wrap-Up and Homework Assignment (5 minutes)

- **Objective**: Close the session and set up future learning.

- **Activities**:

- Recap: “We’ve seen how prompts can shape fairness. Next session, you’ll start your midterm projects
—think about ethics as you plan!”

- Homework:

- Task: Test one prompt from your own work (past exercises or personal projects), identify bias, and
revise it. Write a 1-paragraph reflection on the process.

- Due: Next session (Session 19).


- Preview: “Session 19 is project ideation—bring your creative hats!”

---

### Assessment

- **Participation (5 points)**: Engagement in discussion and exercise effort.

- **Exercise Output (informal)**: Quality of revised prompts (feedback given in-class).

- **Homework (5 points)**: Completion and thoughtfulness of reflection (graded later).

---

### Contingency Plan

- **If time runs short**: Skip volunteer sharing, summarize key examples, and move reflection to
homework.

- **If tech fails**: Use pre-prepared outputs (screenshots) for demo and exercise; focus on manual
prompt rewriting.

---

### Notes for Instructor

- Encourage experimentation—some students may overcorrect (e.g., overly vague prompts). Guide them
toward balance.

- Be ready for diverse student experiences with bias (e.g., cultural perspectives); foster inclusive
discussion.

- Tie back to tools like Grok or ChatGPT if students ask about model-specific quirks.

---
