Week 9 Session 2 Lesson Plan

This lesson plan covers Week 9, Session 2 of the "Prompt Engineering Specialization" course. The session falls under **Part 2: Intermediate Techniques and Applications**, focusing on ethics and bias in AI prompting. The plan is designed for a 90-minute session mixing lecture, discussion, and hands-on exercises, tailored to beginner-to-advanced learners.
---
### Lesson Plan: Week 9, Session 18 - Mitigating Bias Through Prompt Design
**Date**: Assumed to align with course timeline (e.g., mid-April 2025, based on a February 25, 2025
start date)
**Duration**: 90 minutes
**Level**: Intermediate
**Objective**: Equip students with practical techniques to identify and reduce bias in AI-generated
outputs through thoughtful prompt design, fostering ethical AI use.
---
### Learning Objectives
By the end of this session, students will be able to:
1. Identify bias in AI-generated outputs produced by their own prompts.
2. Apply specific prompt engineering techniques to mitigate bias (e.g., neutral language, explicit instructions).
---
### Materials Needed
- Access to an AI platform (e.g., ChatGPT, Grok, or a sandbox environment provided by the course).
---
#### Introduction
- **Activities**:
- Quick greeting and overview of Session 18’s focus: moving from understanding bias (Session 17) to
actionable mitigation.
- Pose a question to engage: “Have you noticed any biased outputs in your own prompt experiments so
far?”
- **Transition**: “Today, we’ll learn how to take control of that through prompt design.”
#### Lecture: Bias Mitigation Techniques
- **Content**:
- Brief explanation: Model biases (e.g., gender, race) can be worsened or countered by prompt
phrasing.
- Example: “Write a description of a typical software engineer” → Stereotypical output vs. “Write a description of a software engineer, avoiding stereotypes.”
- **Explicit instructions**: State directly what the model should avoid or include (e.g., “avoiding stereotypes”).
- **Counterbalancing**: Ask for multiple perspectives (e.g., “List pros and cons from diverse
viewpoints”).
- **Constraints**: Limit outputs to factual or neutral content (e.g., “Avoid assumptions about gender
or culture”).
- **Visuals**: Show before-and-after prompt examples with AI outputs (e.g., biased job ad vs. neutral
job ad).
- **Engagement**: Pause at 0:20 to ask, “Which of these techniques do you think would be hardest to
implement consistently? Why?”
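The techniques above can be sketched as a small helper that layers mitigation clauses onto a base prompt. This is an illustrative sketch only; the `build_debiased_prompt` helper and the exact clause wording are assumptions, not part of the course materials, and real prompts still need human review.

```python
# Sketch: layering bias-mitigation clauses onto a base prompt.
# Helper name and clause wording are illustrative, not course-prescribed.

MITIGATIONS = {
    "explicit": "Avoid stereotypes in any descriptions of people.",
    "counterbalance": "Present pros and cons from multiple, diverse viewpoints.",
    "constrain": "Stick to factual, neutral content; avoid assumptions about gender or culture.",
}

def build_debiased_prompt(base: str, techniques: list[str]) -> str:
    """Append the selected mitigation clauses to a base prompt."""
    clauses = [MITIGATIONS[t] for t in techniques]
    return " ".join([base.rstrip(".") + "."] + clauses)

prompt = build_debiased_prompt(
    "Write a description of a typical software engineer",
    ["explicit", "constrain"],
)
print(prompt)
```

Students could extend the dictionary with their own clauses during the exercise, which makes the "at least two mitigation techniques" requirement concrete.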
#### Demo: Revising a Biased Prompt
- **Activity**:
- Original Prompt: “Write a description of a CEO.” (Typical output: skews toward a single gender and leadership style.)
- Revised Prompt: “Write a description of a CEO, ensuring diversity in gender and leadership style, avoiding stereotypes.”
- Expected Output: More balanced (e.g., “A leader of any gender, with a range of styles from
collaborative to decisive…”).
- Explain changes: Added explicit diversity instruction and removed implicit assumptions.
- **Discussion**: “What differences did you notice? How could we refine this further?”
#### Hands-On Exercise
- **Setup**:
- Distribute handout: “Prompt Bias Mitigation Checklist” (e.g., “Is the language neutral? Are
assumptions avoided?”).
3. “Generate a list of traits for a successful entrepreneur.” (Risk: Socioeconomic or cultural bias)
- **Instructions**:
- Step 1 (10 min): Test each prompt on an AI platform, note biased outputs.
- Step 2 (15 min): Rewrite each prompt using at least two mitigation techniques from the lecture.
- **Support**: Instructor circulates to assist, answer questions, and highlight creative solutions.
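The "Prompt Bias Mitigation Checklist" handout could also be mirrored as a tiny heuristic screener that flags obviously loaded wording before a prompt is sent to the model. The `check_prompt` function and its word list are illustrative assumptions, not a real bias detector, and they cannot replace the human judgment the checklist asks for.

```python
# Sketch of the "Prompt Bias Mitigation Checklist" as a heuristic screen.
# Word list is illustrative; a real review still needs human judgment.

LOADED_TERMS = {
    "typical": "implies a stereotype of the 'usual' person",
    "normal": "implies a default identity",
    "he": "assumes gender",
    "she": "assumes gender",
}

def check_prompt(prompt: str) -> list[str]:
    """Return checklist warnings for words that often smuggle in assumptions."""
    words = prompt.lower().replace(",", " ").split()
    return [f"'{w}': {why}" for w, why in LOADED_TERMS.items() if w in words]

warnings = check_prompt("Write a description of a typical software engineer")
print(warnings)  # flags 'typical'
```

A screener like this only catches surface wording; the exercise's manual rewrite step remains the point.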
#### Group Share-Out
- **Activities**:
- Volunteers (2-3 pairs) share their original vs. revised prompts and outputs (10 min).
- Example Qs: “What bias did you spot? How did your changes affect the output?”
- **Instructor Input**: Highlight standout examples, connect to real-world implications (e.g., fair AI in
hiring).
#### Wrap-Up and Homework
- **Activities**:
- Recap: “We’ve seen how prompts can shape fairness. Next session, you’ll start your midterm projects; think about ethics as you plan!”
- Homework:
- Task: Test one prompt from your own work (past exercises or personal projects), identify bias, and
revise it. Write a 1-paragraph reflection on the process.
---
### Assessment
- Informal: Instructor observation during the hands-on exercise and share-out (participation, quality of revised prompts).
- Formal: The 1-paragraph homework reflection, reviewed for evidence of bias identification and revision.
---
### Contingency Plans
- **If time runs short**: Skip volunteer sharing, summarize key examples, and move reflection to homework.
- **If tech fails**: Use pre-prepared outputs (screenshots) for demo and exercise; focus on manual
prompt rewriting.
---
### Instructor Notes
- Encourage experimentation—some students may overcorrect (e.g., overly vague prompts). Guide them toward balance.
- Be ready for diverse student experiences with bias (e.g., cultural perspectives); foster inclusive
discussion.
- Tie back to tools like Grok or ChatGPT if students ask about model-specific quirks.
---