
Week 11 Session 1 Lesson Plan

The lesson plan for Week 11, Session 21 of the Prompt Engineering Specialization course focuses on ensuring reliable and safe outputs in generative AI. Students will learn techniques for validating AI outputs through lectures, demonstrations, and hands-on exercises, aimed at preparing them for complex workflows and capstone projects. The session includes discussions on the importance of reliability, practical applications of prompt engineering, and a group reflection on the exercises completed.

Uploaded by McKay Thein

© All Rights Reserved

Below is a detailed lesson plan for **Week 11, Session 21: Ensuring Reliable and Safe Outputs**, the first session of **Part 3: Advanced Prompt Engineering and Capstone** in the "Prompt Engineering Specialization" course. This session introduces advanced topics in trustworthy AI, focusing on techniques for ensuring generative AI outputs are reliable and safe. The plan is designed for a 90-minute session, blending lecture, demonstration, and hands-on exercises for intermediate-to-advanced learners.

---

### Lesson Plan: Week 11, Session 21 - Ensuring Reliable and Safe Outputs

**Date**: Assumed to align with course timeline (e.g., early May 2025, based on a February 25, 2025
start date)

**Duration**: 90 minutes

**Level**: Advanced

**Prerequisites**: Completion of Parts 1 and 2 (Weeks 1-10), including midterm projects

**Objective**: Equip students with strategies to validate and enhance the reliability and safety of AI-generated outputs, preparing them for complex workflows and capstone projects.

---

### Session Goals

By the end of this session, students will be able to:

1. Understand the importance of reliability and safety in generative AI applications.

2. Apply prompt engineering techniques to validate outputs and reduce errors or harmful content.

3. Test and refine prompts to ensure consistent, trustworthy results.

---

### Materials Needed

- Access to an AI platform (e.g., Grok, ChatGPT, or a course-provided sandbox).

- Slides or visual aids (examples of unreliable outputs, validation techniques).

- Handout: “Reliability and Safety Checklist” (digital or print).


- Sample prompts and outputs (pre-prepared for demo and exercise).

- Student laptops or devices for exercises.

---

### Lesson Plan Breakdown

#### 0:00–0:05 | Welcome and Introduction to Part 3 (5 minutes)

- **Objective**: Transition to advanced topics and set the session’s focus.

- **Activities**:

- Greeting: “Welcome to Week 11—we’re now in Part 3, where we level up to advanced prompt
engineering!”

- Quick recap: “You’ve built solid foundations and intermediate systems; now we focus on
trustworthiness and complexity.”

- Session preview: “Today, we’ll tackle how to make AI outputs reliable and safe—crucial for real-world
use.”

- **Transition**: “Let’s dive into why this matters.”

#### 0:05–0:25 | Lecture: Trustworthy AI and Validation Techniques (20 minutes)

- **Objective**: Explain concepts and methods for ensuring reliable, safe outputs.

- **Content**:

1. **Why Reliability and Safety Matter** (5 min)

- Risks of untrustworthy AI: Misinformation, inconsistency, harmful outputs (e.g., biased advice,
unsafe instructions).

- Real-world stakes: Business decisions, education, healthcare applications.

2. **Key Techniques** (15 min)

- **Explicit Constraints**: Instruct AI to avoid errors/harm (e.g., “Only provide factual data”).

- **Validation Prompts**: Ask AI to double-check itself (e.g., “Explain your reasoning” or “Verify this
answer”).

- **Redundancy**: Cross-check outputs with multiple prompts (e.g., rephrase and compare).

- **Safety Filters**: Prevent harmful content (e.g., “Do not generate violent or offensive text”).

- **Iterative Testing**: Run prompts multiple times to spot inconsistencies.

- **Visuals**: Show examples:

- Unreliable: “What’s the best diet?” → Vague, unverified claims.

- Reliable: “List evidence-based diet tips, citing sources” → Structured, factual.

- **Engagement**: Pause at 0:20 to ask, “Where have you seen unreliable AI outputs in your projects so
far?”
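The redundancy and iterative-testing techniques above can be sketched as a small script. This is a minimal illustration, not course material: `query_model` is a hypothetical stand-in for whatever API the course sandbox actually exposes.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call (illustrative only)."""
    return "Check the local weather before you hike."

def consistency_score(outputs: list[str]) -> float:
    """Fraction of runs agreeing with the most common (normalized) answer.

    A score near 1.0 suggests the prompt yields stable results;
    a low score signals the prompt needs tightening.
    """
    if not outputs:
        return 0.0
    normalized = [o.strip().lower() for o in outputs]
    top_count = Counter(normalized).most_common(1)[0][1]
    return top_count / len(normalized)

# Iterative testing: run the same prompt several times and compare.
runs = [query_model("Give me one hiking safety tip.") for _ in range(5)]
print(f"consistency: {consistency_score(runs):.2f}")
```

In class the comparison is done by eye; the point of the sketch is that redundancy turns “does this look right?” into a repeatable check.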

#### 0:25–0:40 | Demo: Applying Reliability Techniques (15 minutes)

- **Objective**: Model practical application of concepts.

- **Activity**:

- Instructor runs a live demo using an AI platform (e.g., Grok).

- Scenario: “Plan a safe hiking trip.”

- **Initial Prompt**: “Give me a hiking plan.”

- Output: Vague, potentially unsafe (e.g., no weather or gear advice).

- **Revised Prompt**: “Create a hiking plan for a beginner, including safety tips, weather checks, and
gear, verified for accuracy.”

- Output: Detailed, safer (e.g., “Check local weather, wear sturdy boots…”).

- Breakdown: Highlight added constraints (“safety tips”), validation (“verified”), and specificity.

- **Discussion**: “What changed? How could we push this further—say, for a specific location?”

#### 0:40–1:10 | Exercise: Test and Refine for Reliability (30 minutes)

- **Objective**: Practice techniques hands-on.

- **Setup**:

- Distribute “Reliability and Safety Checklist” (e.g., “Are facts verifiable? Is harm avoided?”).

- Provide 3 sample prompts with reliability risks:

1. “Write a summary of World War II.” (Risk: Inaccuracies)

2. “Suggest a workout routine.” (Risk: Unsafe advice)

3. “Explain how to fix a car engine.” (Risk: Incomplete/vague steps)


- **Instructions**:

- Solo or pairs (student choice).

- Step 1 (10 min): Test each prompt, note unreliable or unsafe elements in outputs.

- Step 2 (15 min): Rewrite prompts using at least two techniques (e.g., validation, constraints).

- Step 3 (5 min): Retest and compare results.

- Example Revision: “Suggest a beginner workout routine, verified by fitness guidelines, avoiding unsafe
exercises.”

- **Support**: Instructor circulates, offering tips (e.g., “Add a reasoning step here!”).
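Part of the checklist step can even be automated, which makes a nice aside for students who code. The sketch below is an assumption, not the course handout: the “overconfident” phrase list and source markers are illustrative, and a real pipeline would use a moderation API or classifier rather than keyword matching alone.

```python
def checklist_flags(output: str,
                    overconfident=("guaranteed", "always safe", "no risk")) -> list[str]:
    """Flag simple checklist failures in a model output (heuristic only)."""
    text = output.lower()
    # Overconfident phrasing often marks unverified claims.
    flags = [f"overconfident claim: '{p}'" for p in overconfident if p in text]
    # Crude check for whether the output cites any source at all.
    if not any(marker in text for marker in ("source", "according to")):
        flags.append("no source cited")
    return flags

print(checklist_flags("This workout is guaranteed to burn fat."))
# → ["overconfident claim: 'guaranteed'", "no source cited"]
```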

#### 1:10–1:25 | Group Discussion and Reflection (15 minutes)

- **Objective**: Share insights and connect to future work.

- **Activities**:

- Volunteers (2-3 students) share revised prompts and results (10 min).

- Guiding questions: “What improved? What was still tricky?”

- Class discussion (5 min):

- “Which technique felt most effective?”

- “How might this apply to your capstone ideas?”

- Instructor ties to Part 3: “Reliability is key for trustworthy AI—next session, we’ll test limits with edge
cases.”

#### 1:25–1:30 | Wrap-Up and Homework (5 minutes)

- **Objective**: Close and prep for next steps.

- **Activities**:

- Recap: “You’ve started mastering reliable outputs—great foundation for advanced work!”

- Homework:

- Task: Pick a prompt from your midterm project, refine it for reliability and safety, and write a one-paragraph reflection.

- Due: Session 22.

- Preview: “Next time: Handling edge cases and adversarial prompts—bring your critical thinking!”

---

### Assessment

- **Participation (5 points)**: Engagement in discussion and exercise.

- **Exercise Output (informal)**: Effort in refining prompts (feedback in-class).

- **Homework (5 points)**: Completion and insightfulness (graded later).

---

### Contingency Plan

- **If time runs short**: Cut discussion to 10 min, summarize key takeaways.

- **If tech fails**: Use pre-prepared outputs for demo/exercise; focus on rewriting.

---

### Notes for Instructor

- Emphasize practical stakes (e.g., “Imagine this in a job!”) to motivate.

- Watch for overconfidence—some may assume AI is reliable without testing.

- Link to capstone: Encourage thinking about real-world trust needs.

---
