Week 11 Session 1 Lesson Plan
This lesson plan covers the first session of **Part 3: Advanced Prompt Engineering and Capstone** in the "Prompt Engineering Specialization" course. This session introduces advanced topics in trustworthy AI, focusing on techniques to ensure generative AI outputs are reliable and safe. The plan is designed for a 90-minute session, blending lecture, demonstration, and hands-on exercises for intermediate-to-advanced learners.
---
### Lesson Plan: Week 11, Session 21 - Ensuring Reliable and Safe Outputs
**Date**: Assumed to align with course timeline (e.g., early May 2025, based on a February 25, 2025
start date)
**Duration**: 90 minutes
**Level**: Advanced
**Objective**: Equip students with strategies to validate and enhance the reliability and safety of AI-generated outputs, preparing them for complex workflows and capstone projects.
---
2. Apply prompt engineering techniques to validate outputs and reduce errors or harmful content.
---
- **Activities**:
- Greeting: “Welcome to Week 11—we’re now in Part 3, where we level up to advanced prompt
engineering!”
- Quick recap: “You’ve built solid foundations and intermediate systems; now we focus on
trustworthiness and complexity.”
- Session preview: “Today, we’ll tackle how to make AI outputs reliable and safe—crucial for real-world
use.”
- **Objective**: Explain concepts and methods for ensuring reliable, safe outputs.
- **Content**:
- Risks of untrustworthy AI: Misinformation, inconsistency, harmful outputs (e.g., biased advice,
unsafe instructions).
- **Explicit Constraints**: Instruct AI to avoid errors/harm (e.g., “Only provide factual data”).
- **Validation Prompts**: Ask AI to double-check itself (e.g., “Explain your reasoning” or “Verify this
answer”).
- **Redundancy**: Cross-check outputs with multiple prompts (e.g., rephrase and compare); see the optional sketch after this segment.
- **Safety Filters**: Prevent harmful content (e.g., “Do not generate violent or offensive text”).
- **Engagement**: Pause at 0:20 to ask, “Where have you seen unreliable AI outputs in your projects so
far?”
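- **Optional instructor aid**: a minimal Python sketch of the redundancy and validation techniques above, for a live walkthrough. The `generate()` helper is a hypothetical placeholder, not a specific library API; wire it to whatever LLM client the course uses.

  ```python
  # Minimal sketch: "redundancy" (ask twice, rephrased) plus a "validation prompt"
  # (ask the model to compare its own answers). `generate()` is a hypothetical
  # placeholder for an LLM call, not a real library function.

  def generate(prompt: str) -> str:
      """Placeholder LLM call; replace with the course's client of choice."""
      raise NotImplementedError("Connect this to your model before running.")

  def cross_check(question: str) -> dict:
      # Explicit constraint baked into the first phrasing.
      first = generate(f"Only provide factual, verifiable information. {question}")
      # Redundancy: the same request, rephrased.
      second = generate(f"Answer carefully and explain your reasoning. {question}")
      # Validation prompt: have the model compare its own two answers.
      verdict = generate(
          "Do these two answers agree on the key facts? Reply AGREE or DISAGREE, "
          f"then explain briefly.\n\nAnswer A: {first}\n\nAnswer B: {second}"
      )
      return {"first": first, "second": second, "verdict": verdict}
  ```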
- **Activity**:
- **Revised Prompt**: “Create a hiking plan for a beginner, including safety tips, weather checks, and
gear, verified for accuracy.”
- Output: Detailed, safer (e.g., “Check local weather, wear sturdy boots…”).
- Breakdown: Highlight added constraints (“safety tips”), validation (“verified”), and specificity (see the draft-then-verify sketch below).
- **Discussion**: “What changed? How could we push this further—say, for a specific location?”
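- **Optional**: the same demo expressed as a two-step draft-then-verify chain, reusing the hypothetical `generate()` placeholder from the earlier sketch (restated here so the snippet stands alone).

  ```python
  # Sketch of the demo as a chain: draft the plan, then run a validation pass
  # that audits the draft against the safety constraints before it is shown.

  def generate(prompt: str) -> str:
      """Hypothetical placeholder LLM call, as in the earlier sketch."""
      raise NotImplementedError("Connect this to your model before running.")

  HIKING_PROMPT = (
      "Create a hiking plan for a beginner, including safety tips, weather checks, "
      "and gear, verified for accuracy."
  )

  def draft_and_verify(prompt: str) -> str:
      draft = generate(prompt)
      # Validation prompt: the model reviews its own draft for unsafe or
      # unverifiable advice before anything reaches the user.
      return generate(
          "Review the plan below. List any advice that is unsafe for a beginner or "
          "not verifiable, then output a corrected plan.\n\n" + draft
      )
  ```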
#### 0:40–1:10 | Exercise: Test and Refine for Reliability (30 minutes)
- **Setup**:
- Distribute “Reliability and Safety Checklist” (e.g., “Are facts verifiable? Is harm avoided?”); a version of the checklist as code appears after this exercise block.
- Step 1 (10 min): Test each prompt, note unreliable or unsafe elements in outputs.
- Step 2 (15 min): Rewrite prompts using at least two techniques (e.g., validation, constraints).
- Example Revision: “Suggest a beginner workout routine, verified by fitness guidelines, avoiding unsafe
exercises.”
- **Support**: Instructor circulates, offering tips (e.g., “Add a reasoning step here!”).
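- **Optional**: the checklist expressed as data, so every student applies the same criteria to each test output. The third criterion and the `review_output()` helper are illustrative additions, not part of the handout.

  ```python
  # Sketch of the "Reliability and Safety Checklist" as data, applied per output.
  # Students record a yes/no judgment per criterion; the helper lists what failed.

  CHECKLIST = {
      "facts_verifiable": "Are the facts verifiable?",
      "harm_avoided": "Is harmful or unsafe advice avoided?",
      "constraints_followed": "Does the output respect the prompt's explicit constraints?",
  }

  def review_output(answers: dict) -> list:
      """Return the checklist questions the output failed, per the student's judgment."""
      return [question for key, question in CHECKLIST.items() if not answers.get(key, False)]

  # Example: the unrefined workout prompt produced unverifiable, risky advice.
  failures = review_output(
      {"facts_verifiable": False, "harm_avoided": False, "constraints_followed": True}
  )
  print(failures)  # lists the two questions this output failed
  ```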
- **Activities**:
- Volunteers (2-3 students) share revised prompts and results (10 min).
- Instructor ties to Part 3: “Reliability is key for trustworthy AI—next session, we’ll test limits with edge
cases.”
- **Activities**:
- Recap: “You’ve started mastering reliable outputs—great foundation for advanced work!”
- Homework:
- Task: Pick a prompt from your midterm project, refine it for reliability/safety, and write a 1-paragraph reflection.
- Preview: “Next time: Handling edge cases and adversarial prompts—bring your critical thinking!”
---
### Assessment
---
- **If time runs short**: Cut discussion to 10 min, summarize key takeaways.
- **If tech fails**: Use pre-prepared outputs for demo/exercise; focus on rewriting.
---
What do you think? Want to adjust timing, add more examples, or tweak anything else? I’m happy to
refine it further!