
AI PROJECT LOGBOOK

Resource for Students


(Adapted from “IBM EdTech Youth Challenge – Project Logbook” developed by IBM
in collaboration with Macquarie University, Australia and Australian Museum)

KEY PARTNERS

INDIA IMPLEMENTATION PARTNERS

GLOBAL PARTNERS
AI Project Logbook

PROJECT NAME: GENERAL CHATBOT

SCHOOL NAME: MAHARISHI VIDYA MANDIR

YEAR/CLASS: 2024-25/XII

TEACHER NAME: K REVATHI

TEACHER EMAIL:

TEAM MEMBER NAMES AND GRADES:

1. Vishwajith G

2. Naren Ragunandhan

3. A K Jeevesh

4. S Lokeshwaran

5. G Nithish
1. Introduction
This document is your Project Logbook, and it will be where you record your ideas,
thoughts and answers as you work to solve a local problem using AI.

Make a copy of the document in your shared drive and work through it digitally with your
team. You can also print a copy of the document and submit a scanned copy once you have
completed the Project Logbook. Feel free to add pages and any other supporting material to
this document.

Refer to the AI Project Guide for more details about what to do at each step of your project.

2. Team Roles
Who is in your team and what are their roles?

Role | Role description | Team member name
Leader | Schedules and allocates tasks among team members; fills in the logbook and acts as a link between the teacher and the team members. | Vishwajith G
Designer | Works with the team to design the project prototype. | Naren Ragunandhan
Information researcher | Collects questions from the team, finds answers and forwards them to the team leader. | A K Jeevesh
Data expert | Decides on what type of data to work with to train an AI model. | S Lokeshwaran
Tester | Works with the users to test the prototype and gather their feedback. | G Nithish
Project plan

The following table is a guide for your project plan. You may use this or create your own
version using a spreadsheet which you can paste into this section. You can expand the
‘Notes’ section to add reminders, things that you need to follow up on, problems that
need to be fixed urgently, etc.

Phase | Task | Planned start | Planned end | Planned duration | Actual start | Actual end | Actual duration | Who is responsible
Preparing for the project | Coursework, readings | 15/5/2024 | 18/5/2024 | 2 hrs | 15/5/2024 | 18/5/2024 | 2 hrs | Naren
 | Set up a team folder on a shared drive | 15/5/2024 | 18/5/2024 | 15 min | 15/5/2024 | 18/5/2024 | 25 min | Jeevesh
Defining the problem | Problem definition | 19/5/2024 | 19/5/2024 | 20 min | 19/5/2024 | 19/5/2024 | 30 min | Lokesh
 | Research issues in our community | 19/5/2024 | 19/5/2024 | 30 min | 19/5/2024 | 19/5/2024 | 40 min | Lokesh
 | Team meeting: discuss issues and select an issue for the project | 22/5/2024 | 22/5/2024 | 20 min | 22/5/2024 | 22/5/2024 | 20 min | All team members
 | Complete section 3 of the Project Logbook; rate yourselves | 25/5/2024 | 25/5/2024 | 30 min | 25/5/2024 | 25/5/2024 | 40 min | Vishwajith, Nithish
Understanding the users | Identify users | 27/5/2024 | 27/5/2024 | 2 hrs | 27/5/2024 | 27/5/2024 | 2 hrs | Vishwajith
 | Meeting with users to observe them | 29/5/2024 | 29/5/2024 | 1 day | 29/5/2024 | 29/5/2024 | 1 day | Jeevesh, Nithish
 | Interview with user (1) | 30/5/2024 | 30/5/2024 | 2 hrs | 30/5/2024 | 30/5/2024 | 2 hrs | Naren
 | Interview with user (2), etc. | 30/5/2024 | 30/5/2024 | 2 hrs | 30/5/2024 | 30/5/2024 | 2 hrs | Jeevesh
 | Complete section 4 of the Project Logbook; rate yourselves | 10/6/2024 | 10/6/2024 | 3 hrs | 10/6/2024 | 10/6/2024 | 2 hrs 30 min | Vishwajith
Brainstorming | Team meeting to generate ideas for a solution | 10/6/2024 | 10/6/2024 | 1 hr | 10/6/2024 | 10/6/2024 | 1 hr | All team members
 | Complete section 5 of the Project Logbook; rate yourselves | 11/6/2024 | 11/6/2024 | 2 hrs | 11/6/2024 | 11/6/2024 | 2 hrs | Nithish
Designing our solution | Team meeting to design the solution | 11/6/2024 | 11/6/2024 | 2 hrs | 11/6/2024 | 11/6/2024 | 2 hrs | Vishwajith, Jeevesh
 | Complete section 6 of the logbook; rate yourselves | 15/6/2024 | 15/6/2024 | 8 hrs | 15/6/2024 | 15/6/2024 | 10 hrs | Lokesh
Collecting and preparing data | Team meeting to discuss data requirements | 15/6/2024 | 15/6/2024 | 30 min | 15/6/2024 | 15/6/2024 | 30 min | Vishwajith
 | Data collection | 15/6/2024 | 15/6/2024 | 1 hr | 15/6/2024 | 15/6/2024 | 1 hr | Jeevesh
 | Data preparation and labeling | 15/6/2024 | 15/6/2024 | 1 hr | 15/6/2024 | 15/6/2024 | 1 hr | Naren
 | Complete section 6 of the Project Logbook | 15/6/2024 | 15/6/2024 | 30 min | 15/6/2024 | 15/6/2024 | 45 min | Lokesh
 | Team meeting to plan the prototyping phase | 15/6/2024 | 15/6/2024 | 15 min | 15/6/2024 | 15/6/2024 | 15 min | Nithish
Prototyping and testing | Train your model with the input dataset | 20/6/2024 | 21/6/2024 | 4 hrs | 20/6/2024 | 21/6/2024 | 4 hrs | Naren
 | Test your model and keep training with more data until you think your model is accurate | 21/6/2024 | 21/6/2024 | 4 hrs | 21/6/2024 | 21/6/2024 | 4 hrs | Vishwajith
 | Complete section 8 of the Project Logbook; rate yourselves | 21/6/2024 | 21/6/2024 | 1 hr | 21/6/2024 | 21/6/2024 | 1 hr | Lokesh
 | Team meeting to discuss the testing plan | 24/6/2024 | 24/6/2024 | 30 min | 24/6/2024 | 24/6/2024 | 30 min | All members
Completing the logbook | Reflect on the project with your team | 24/6/2024 | 24/6/2024 | | 24/6/2024 | 24/6/2024 | | All members
 | Complete sections 10 and 11 of the Project Logbook | 25/6/2024 | 25/6/2024 | | 25/6/2024 | 25/6/2024 | | All members
 | Review your Project Logbook and video | 25/6/2024 | 25/6/2024 | | 25/6/2024 | 25/6/2024 | | All members
Submission | Submit your entries on IBM | | | | | | | All members
Communications plan

● How will you plan to meet for discussion?

Online and Offline modes

● How often will you come together to share your progress?

Two to three times a week.

● Who will set up online documents and ensure that everyone is contributing?

Naren

● What tool will you use for communication?


Face-to-face meetings, Google Drive, WhatsApp, Gmail

Team meeting minutes (create one for each meeting held)

● Date of meeting: 15/5/2024. Who attended: Everyone

● Purpose of meeting: To decide roles and responsibilities.

Topics discussed:
1. Project Topic - chatbot

2. Team Roles

3. Problem Definition

4. Communication plans
3. Problem Definition

1. Problem Overview
● Description: Define the primary reason for creating the chatbot. What business or user challenge does
it aim to solve?
Example: “Users face delays in getting customer support, leading to dissatisfaction and lost business
opportunities.”

2. Objective
● Goal: Clearly state the intended outcome of the chatbot.
Example: “The chatbot will automate customer service to provide instant responses to common
inquiries, thereby improving user experience and reducing support costs.”

3. Target Audience
● Who will use it?: Identify the end-users (e.g., customers, employees, students).
Example: “The chatbot is aimed at customers seeking technical support for software products.”

4. Key Challenges
● Current Problems: List existing pain points that the chatbot will alleviate.
Example:
○ Long response times from human agents.
○ Repetitive questions consuming agent time.
○ Users needing help outside business hours.

5. Scope and Requirements


● Features: Outline the functionality the chatbot must have to address the problem.
Example:
○ 24/7 availability.
○ Answer FAQs about products and services.
○ Handoff to human agents for complex issues.
○ Multilingual support.

6. Success Metrics
● How will you measure success?
Example:
○ Reduction in average response time by 70%.
○ 50% reduction in customer service emails.
○ 90% of inquiries resolved without human intervention.

7. Technical Constraints
● What limitations might impact the project?
Example:
○ Integration with existing customer support platforms.
○ NLP accuracy for understanding user queries.
○ Data privacy and security requirements.
Write your team’s problem statement in the format below.
By defining the problem clearly, you ensure that the chatbot is developed with a focused objective and
measurable success outcomes, addressing both user needs and business goals.

4. The Users
Who are the users and how are they affected by the problem?
Everyone who relies on support for routine questions; the chatbot is intended to make their work easier.

What have you actually observed about the users and how the problem affects them?

1. Delayed Response Times

● Observation: Users often wait too long for a response to their inquiries, especially during peak hours or
outside business hours.
● Impact: This leads to frustration, reduced customer satisfaction, and users abandoning inquiries or
escalating issues through alternative, costlier support channels (e.g., phone calls).

2. Repetitive Queries

● Observation: A large number of user queries are repetitive, such as basic troubleshooting steps, account
management, or common product FAQs.
● Impact: Support agents are overwhelmed with answering the same questions multiple times, which limits
their availability to address more complex, high-priority issues. Users feel frustrated when they experience
delays for simple information.

3. Limited Availability

● Observation: Customer support is only available during specific hours, and users who need assistance
outside of this timeframe are left without help.
● Impact: Users who require assistance in different time zones or during off-hours are unable to get immediate
answers, leading to dissatisfaction and negative brand perception.

4. User Preferences for Instant Help

● Observation: Users prefer self-service options and instant responses for simple issues rather than waiting
for human agents.
● Impact: The inability to provide quick, automated solutions for basic queries results in lost opportunities for
quick resolutions and user engagement drops.

5. Complex Queries Needing Human Intervention

● Observation: While some users have simple queries that can be automated, others have more complex
issues that require human intervention.
● Impact: Users get frustrated when their complex queries are handled poorly by automated systems,
resulting in negative experiences and the need for proper escalation processes to human agents.

6. Inconsistent Customer Experience Across Channels


● Observation: Users experience different levels of service and response quality depending on the channel
(e.g., email vs. live chat vs. phone support).
● Impact: This inconsistency creates confusion and leads to a lack of trust in the support system, as users
expect the same level of service regardless of the channel they use.

Record your interview questions here as well as responses from users.

Empathy Map

Map what the users say, think, do and feel about the problem in this table

What our users are saying:
- "It takes too long to get a response."
- "I just need a simple answer, why is this so difficult?"
- "I can't get help outside of business hours."

What our users are thinking:
- "Why do I have to wait for something that should be quick?"
- "Is there a faster way to get this information?"
- "Why can't I resolve this on my own?"

What our users are doing:
- Repeatedly sending inquiries or switching between support channels (e.g., email, live chat, phone).
- Abandoning inquiries if not addressed quickly.
- Trying to find answers through self-help resources like FAQs or forums.

How our users feel:
- Frustrated by long wait times.
- Impatient when seeking answers to simple questions.
- Dissatisfied when they can't reach support outside of work hours.
- Anxious or stressed when they can't get immediate help for urgent issues.
What are the usual steps that users currently take related to the problem and where are the
difficulties?

1. Step 1: Identifying the Issue


○ Action: Users encounter a problem with a product or service (e.g., technical issue, billing
question).
○ Difficulty: Users may not know where to start or what support channel to use, causing
confusion and delays.
2. Step 2: Searching for Help
○ Action: Users often search for help on their own, typically by browsing FAQs, help articles, or
forums.
○ Difficulty: These resources may be outdated, hard to navigate, or not specific enough to
address their issue. Users may also find it challenging to locate the right information, especially
if their query is complex.
3. Step 3: Contacting Support
○ Action: When self-help fails, users contact customer support through email, chat, or phone.
○ Difficulty:
■ Long wait times: During peak hours or outside business hours, users experience
significant delays in getting responses.
■ Repetitive processes: Users often have to provide the same information multiple times
(e.g., through chatbots or forms) before reaching a human agent.
4. Step 4: Interacting with Support
○ Action: Users engage with customer service agents or automated systems (e.g., chatbot or
IVR).
○ Difficulty:
■ Inefficient chatbot interactions: When interacting with a chatbot, users may experience
poor responses to complex questions, forcing them to escalate to a human.
■ Lack of personalization: Automated responses may feel generic and not tailored to the
user’s specific situation, leading to frustration.
5. Step 5: Resolution or Escalation
○ Action: If the problem is resolved, the user leaves the support process. If not, they may
escalate the issue to a higher-level agent.
○ Difficulty:
■ Delayed escalations: If the chatbot or lower-level support fails to resolve the issue, users
experience further delays as the problem is passed to human agents.
■ Lack of follow-up: In some cases, users feel abandoned if they don’t receive timely
updates on their escalated issue.
6. Step 6: Post-Support Feedback
○ Action: Users may be asked to rate their experience after the issue is resolved.
○ Difficulty: Negative experiences with long wait times, ineffective chatbots, or unresolved issues
lead to lower feedback scores and dissatisfaction.

Write your team’s problem statement in the format below.

Key Difficulties:

● Difficulty in finding relevant information quickly (especially in self-help).


● Long response times from human agents or inefficient chatbot handling.
● Repetitive steps, like re-entering the same information across different channels.
● Inadequate chatbot capability for complex issues, leading to the need for human
intervention.
● Unavailability of support during off-hours, which leaves users without immediate
assistance.

5. Brainstorming
Ideas

How might you use the power of AI/machine learning to solve the users’ problem by increasing
their knowledge or improving their skills?

AI Idea #1 AI-driven chatbots can instantly answer common questions using natural language
processing (NLP) to interpret user queries and provide accurate, context-aware
responses (see the minimal sketch after this list).

AI Idea #2 Machine learning algorithms can analyze past user interactions and tailor responses
based on individual preferences and previous queries.

AI Idea #3 AI can diagnose user issues by analyzing patterns in their behavior or input (e.g.,
error messages, troubleshooting steps). The chatbot can suggest proactive solutions
or provide interactive guides for resolving issues in real-time.

AI Idea #4 AI chatbots can use ML to continuously learn from user feedback and new data
sources, adapting their responses and improving their problem-solving capabilities
over time.

AI Idea #5 AI can provide step-by-step interactive tutorials or guides that help users develop
their skills in using a product or solving recurring issues. For instance, the chatbot
can walk users through advanced product features or troubleshooting methods.
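
As a rough, non-authoritative illustration of Idea #1, the Python sketch below matches a user's question against a small set of hand-written intents and returns a canned answer. The intents, keywords and answers are invented placeholders rather than the team's actual content, and simple keyword overlap stands in for a full NLP model.

# Minimal sketch of AI Idea #1: answer common questions by matching a user's
# query against known intents. Intents and answers are invented placeholders.
import re

FAQ_INTENTS = {
    "opening_hours": {
        "keywords": {"hours", "open", "timing", "timings"},
        "answer": "Support is available 24/7 through this chatbot.",
    },
    "reset_password": {
        "keywords": {"password", "reset", "login", "locked"},
        "answer": "Use the 'Forgot password' link on the sign-in page to reset it.",
    },
}

def reply(user_query: str) -> str:
    """Return the canned answer whose keywords best overlap the query."""
    words = set(re.findall(r"[a-z]+", user_query.lower()))
    best_intent, best_score = None, 0
    for name, intent in FAQ_INTENTS.items():
        score = len(words & intent["keywords"])
        if score > best_score:
            best_intent, best_score = name, score
    if best_intent is None:
        return "Sorry, I didn't understand that. Let me connect you to a human agent."
    return FAQ_INTENTS[best_intent]["answer"]

if __name__ == "__main__":
    print(reply("How do I reset my password?"))

In the real prototype this keyword lookup would be replaced by the NLP and machine learning tools named in Section 7.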
6. Design

What are the steps that users will now do using your AI solution to address the problem?
6. Data

What data will you need to train your AI solution?

1. Historical customer queries (chat logs, emails, etc.).


2. Knowledge base content (FAQs, guides, documentation).
3. User profiles and preferences for personalization.
4. Chatbot feedback and ratings to improve performance.
5. Escalation patterns to learn when to hand off to humans.
6. Product/service usage data to predict and solve common issues.
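
As a hedged sketch of how the historical queries listed above might be turned into labelled training examples, the snippet below assumes a hypothetical CSV export named chat_logs.csv with user_message and intent columns; the team's actual data source and format may differ.

# Illustrative sketch only: turn exported chat logs into (text, intent) training
# examples. The file name "chat_logs.csv" and its columns are assumptions.
import csv

def load_examples(path: str = "chat_logs.csv"):
    """Read rows like: user_message,intent and return labelled examples."""
    examples = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = row["user_message"].strip().lower()  # light normalisation
            label = row["intent"].strip()               # e.g. "reset_password"
            if text and label:                          # skip blank rows
                examples.append((text, label))
    return examples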

Where or how will you source your data?

Data needed | Where will the data come from? | Who owns the data? | Do you have permission to use the data? | Ethical considerations
Past records (have) | Public dataset | | Yes | Should be authentic
Identification data (want/need) | Public dataset | | Yes | Should be accurate and authentic
AI models (nice to have) | | | |
7. Prototype

Which AI tool(s) will you use to build your prototype?

To build the prototype, we would use:

1. Dialogflow or Rasa for natural language understanding and chatbot development.
2. TensorFlow or PyTorch for machine learning model training.
3. Amazon Lex or Microsoft Bot Framework for integration and deployment.

Which AI tool(s) will you use to build your solution?

To build the solution, we would use:

1. Dialogflow or Rasa for natural language processing and the chatbot framework.
2. TensorFlow or PyTorch for developing and training machine learning models.
3. Amazon Lex or Microsoft Bot Framework for integration and deployment.
4. MongoDB or Firebase for user data storage and management.
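
To make the training step concrete, here is a minimal sketch of fitting an intent classifier on labelled examples. The team names TensorFlow and PyTorch; this sketch uses scikit-learn's TF-IDF vectoriser and logistic regression purely as a lightweight stand-in, and the example texts and labels are placeholders, not the team's data.

# Sketch of the model-training step (scikit-learn used as a lightweight
# stand-in for TensorFlow/PyTorch): vectorise labelled examples, then fit
# an intent classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data; in practice this comes from the prepared chat logs/FAQs.
texts = ["how do i reset my password", "what are your opening hours",
         "i forgot my password", "are you open on sunday"]
labels = ["reset_password", "opening_hours", "reset_password", "opening_hours"]

# TF-IDF turns each message into a weighted bag-of-words vector;
# logistic regression then learns to map vectors to intents.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["can you reset my login password"]))  # -> ['reset_password']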

What decisions or outputs will your tool generate and what further action needs to be taken
after a decision is made?

The AI tool will generate:

1. User Intent Identification: Classifying user queries to determine the intent.


○ Further Action: Provide relevant responses or escalate to a human agent if necessary.
2. Suggested Solutions: Offering troubleshooting steps or articles based on the identified issue.
○ Further Action: Users can follow the provided guidance or request additional help.
3. Feedback Analysis: Collecting user feedback on responses for continuous improvement.
○ Further Action: Update the training data and adjust algorithms based on feedback trends.
4. Escalation Triggers: Deciding when to hand off complex issues to human support.
○ Further Action: Notify human agents with context about the user’s issue for seamless support.
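
The escalation decision in point 4 can be expressed as a confidence threshold: answer automatically when the classifier is confident, otherwise hand the query and its context to a human agent. The sketch below is only illustrative and assumes a trained classifier exposing predict_proba (such as the one in the earlier training sketch); the threshold value and canned answers are placeholders.

# Sketch of the decision/escalation logic: answer automatically when the model
# is confident, otherwise escalate to a human agent with context. "model" is an
# assumed trained classifier with predict_proba; answers are placeholders.
CONFIDENCE_THRESHOLD = 0.75

CANNED_ANSWERS = {
    "reset_password": "Use the 'Forgot password' link on the sign-in page.",
    "opening_hours": "Support is available 24/7 through this chatbot.",
}

def handle_query(model, user_query: str) -> dict:
    probs = model.predict_proba([user_query])[0]
    best_idx = probs.argmax()
    intent, confidence = model.classes_[best_idx], probs[best_idx]

    if confidence < CONFIDENCE_THRESHOLD or intent not in CANNED_ANSWERS:
        # Escalation trigger: pass the query and predicted intent to a human.
        return {"action": "escalate", "context": {"query": user_query, "guess": intent}}
    return {"action": "answer", "intent": intent, "reply": CANNED_ANSWERS[intent]}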
8. Testing
Who are the users who tested the prototype?

Internal Team Members: Employees from customer support, product management, and
development teams who provide initial feedback on usability and functionality.

Beta Testers: Selected users from the target audience who volunteer to try the chatbot and
share their experiences, focusing on ease of use and effectiveness.

List your observations of your users as they tested your solution.

Ease of Use: Most users found the chatbot interface intuitive and easy to navigate.
Response Time: Users appreciated the quick responses but noted occasional delays with
complex queries.
Query Understanding: Some users experienced frustration when the chatbot misinterpreted
their questions, highlighting the need for improved NLP.
Helpfulness of Suggestions: Many found the provided solutions helpful, while others felt the
information was too generic for specific issues.
Escalation Process: Users were generally satisfied with the escalation process but wanted
clearer communication on response times when handed off to human agents.

Complete the user feedback grid

What works:
- Intuitive interface
- Quick response times for simple queries
- Helpful suggestions for common issues
- Smooth escalation process
- Positive feedback on overall experience

What needs to change:
- Improve NLP for better understanding of queries
- Reduce response time for complex queries
- Make solutions more tailored to specific problems
- Provide clearer communication during escalation
- Simplify the feedback mechanism

Questions:
- How can we further personalize responses?
- What types of queries are most frustrating?
- How do users feel about the escalation process?
- What additional features would enhance the user experience?
- Are users aware of all the features available?

Ideas:
- Add a guided tutorial for first-time users
- Implement a suggestion feature for rephrasing queries
- Create a feedback loop for continuous improvement
- Introduce interactive troubleshooting guides
- Gamify the feedback process to encourage user participation
Refining the prototype: Based on user testing, what needs to be acted on now so that
the prototype can be used?

What improvements can be made later?


9. Team collaboration
How did you actively work with others in your team and with stakeholders?
10. Individual learning reflection
11.1. Team Reflections

A good way to identify what you have learned is to ask yourself what surprised you during
the project. List the things that surprised you and any other thoughts you might have on
issues in your local community.

Team member name:

Team member name:

Team member name:

Team member name:


Team member name:

11. Video link

Enter the URL of your team video:


Appendix
Recommended Assessment Rubric (for Teachers)

LOGBOOK AND VIDEO CONTENT


Each step is scored 3, 2 or 1 points; record the points given for each step.

Problem definition
- 3 points: A local problem which has not been fully solved before is explained in detail with supporting research.
- 2 points: A local problem which has not been fully solved before is described.
- 1 point: A local problem is described.

The Users
- 3 points: Understanding of the user group is evidenced by completion of all of the steps in Section 4 The Users and thorough investigation.
- 2 points: Understanding of the user group is evidenced by completion of most of the steps in Section 4 The Users.
- 1 point: The user group is described but it is unclear how they are affected by the problem.

Brainstorming
- 3 points: A brainstorming session was conducted using creative and critical thinking. A compelling solution was selected with supporting arguments from Section 5 Brainstorming.
- 2 points: A brainstorming session was conducted using creative and critical thinking. A solution was selected with supporting arguments in Section 5 Brainstorming.
- 1 point: A brainstorming session was conducted. A solution was selected.

Design
- 3 points: The use of AI is a good fit for the solution. The new user experience is clearly documented, showing how users will be better served than they are today.
- 2 points: The use of AI is a good fit for the solution and there is some documentation about how it meets the needs of users.
- 1 point: The use of AI is a good fit for the solution.

Data
- 3 points: Relevant data to train the AI model have been identified, as well as how the data will be sourced or collected. There is evidence that the dataset is balanced and that safety and privacy have been considered.
- 2 points: Relevant data to train the AI model have been identified, as well as how the data will be sourced or collected. There is evidence that the dataset is balanced.
- 1 point: Relevant data to train the AI model have been identified, as well as how the data will be sourced or collected.

Prototype
- 3 points: A prototype for the solution has been created and successfully trained to meet users' requirements.
- 2 points: A prototype for the solution has been created and trained.
- 1 point: A concept for a prototype shows how the AI model will work.

Testing
- 3 points: A prototype has been tested with a fair representation of users and all tasks in Section 9 Testing have been completed.
- 2 points: A prototype has been tested with users and improvements have been identified to meet user requirements.
- 1 point: A concept for a prototype shows how it will be tested.

Team collaboration
- 3 points: Effective team collaboration and communication among peers and stakeholders is clearly documented in Section 10 Team collaboration.
- 2 points: Team collaboration among peers and stakeholders is clearly documented in Section 10 Team collaboration.
- 1 point: There is some evidence of team interactions among peers and stakeholders.

Individual learning
- 3 points: Each team member presents a reflective and insightful account of their learning during the project.
- 2 points: Each team member presents an account of their learning during the project.
- 1 point: Some team members present an account of their learning during the project.
Total points
VIDEO PRESENTATION

Each criterion is scored 3 (excellent), 2 (very good) or 1 (satisfactory); record the points given for each criterion.

Communication: The video is well-paced and communicated, following a clear and logical sequence.

Illustrative: Demonstrations and/or visuals are used to illustrate examples, where appropriate.

Accurate language: The video presents accurate science and technology and uses appropriate language.

Passion: The video demonstrates passion from team members about their chosen topic/idea.

Sound and image quality: The video demonstrates good sound and image quality.

Length: The content is presented in the video within a 3-minute timeframe.
Total points
