AI Samvadini: An Intelligent Interviewer Development Guide

AI Samvadini is an intelligent podcast interviewer designed to generate real-time questions based on guest profiles, enhancing the interview experience through contextual understanding and dynamic interaction. The development guide outlines the technical feasibility, operational needs, and economic considerations for creating this AI tool, including its core functionalities and required technologies. The project aims to provide a user-friendly interface for podcasters, allowing them to conduct personalized interviews efficiently.


AI Samvadini

An Intelligent Interviewer: Development Guide.


Kautilya Utkarsh Kumar Mishra

All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in
any form or by any means, including photocopying, recording, or other electronic or mechanical
methods, without the prior written permission of the publisher, except in the case of brief
quotations embodied in critical reviews and certain other noncommercial uses permitted by
copyright law. Although the author/co-author and publisher have made every effort to ensure
that the information in this book was correct at press time, the author/co-author and publisher do
not assume and hereby disclaim any liability to any party for any loss, damage, or disruption
caused by errors or omissions, whether such errors or omissions result from negligence,
accident, or any other cause. The resources in this book are provided for informational purposes
only and should not be used to replace the specialized training and professional judgment of a
health care or mental health care professional. Neither the author/co-author nor the publisher
can be held responsible for the use of the information provided within this book. Please always
consult a trained professional before making any decision regarding the treatment of yourself or
others.

Author – Kautilya Utkarsh Kumar Mishra


Publisher – C# Corner
Editorial Team – Deepak Tewatia, Baibhav Kumar
Publishing Team – Praveen Kumar
Promotional & Media – Rohit Tomar

https://www.c-sharpcorner.com/ebooks/
Educational background
Kautilya Utkarsh completed a Bachelor of Science (B.Sc.) from MMM PG College, affiliated with
Deen Dayal Upadhyaya Gorakhpur University, in the year 2022. During his undergraduate
studies, he gained a strong foundation in Higher Mathematics, Chemistry and Physics. Kautilya
Utkarsh pursued a Master of Computer Application (MCA) from ABES Engineering College,
Ghaziabad affiliated with Dr. APJ Abdul Kalam Technical University, Lucknow, graduating in the
year 2024. Throughout his master's program, he delved deeper into advanced topics such as
artificial intelligence and machine learning. Additionally, he honed his skills in software
engineering principles and system architecture. These academic pursuits have made him well-
prepared to excel in professional roles within the tech industry.

Acknowledgment
This book is dedicated to author Kautilya Utkarsh’s father Prof. (Dr) Shrinivas Mishra and
mother, Mrs. Anita Mishra, for their unwavering love, endless support, and the countless
sacrifices they have made to shape the person I am today. Your wisdom, encouragement, and
belief in me have been the foundation of all my endeavors. This book is a testament to your
enduring strength and boundless generosity. With all my love and gratitude, I dedicate this work
to you.

Special Thanks
Kautilya Utkarsh extends his heartfelt gratitude to the following individuals who have supported
him throughout the journey of creating this book: Mr. Mahesh Chand, Mr. Bhasker Das, Mr. Rohit
Gupta, Prof. (Dr) Devendra Kumar, Prof. (Dr) Shikha Verma, Asst. Prof. Priya Mishra, Asst. Prof.
Meghna Gupta and Asst. Prof. Surbhi Sharma.

Mentors and Guide


Mr. Bhasker Das and Mr. Rohit Gupta for their guidance and valuable insights that have
enriched the content of this book.

— Kautilya Utkarsh

Table of Contents:
Introduction to AI Samvadini
Development Overview
Feasibility Study
Requirements
Code and Explanation
Future Scope

1
Introduction to AI Samvadini

Overview

In this chapter, we explore AI Samvadini, an intelligent podcast interviewer that generates
real-time questions based on guest profiles, offering seamless interaction and feedback for
personalized interviews.

AI Samvadini is an intelligent podcast interviewer capable of hosting a podcast with any guest,
asking questions in real time like a real interviewer. It also supports voice interaction: you can
talk to AI Samvadini! If you prefer speaking over typing, just use your microphone to answer the
questions.

Problem Statement for AI Samvadini

An intelligent podcast interviewer that generates engaging and dynamic questions in real time.

Key features include:


• Contextual Understanding: the AI analyses the conversation to create questions that fit
smoothly into the interview.
• Guest Profiling: the AI learns about the guest's background, expertise, and interests to
ask specific, relevant questions.
• Dynamic Interaction: the AI adapts to the interview's direction and asks follow-up
questions based on the guest's answers.
• Content Diversity: the AI supports a wide range of topics, allowing podcasters to
explore various subjects while maintaining the quality of the interview.

What Does AI Samvadini Do?


• Asks Interview Questions: AI Samvadini will ask you questions based on the job profile
you are working in. For example, if you are a software engineer, it will ask you questions
about programming and technology.
• Guideline Support: It follows a set of interview guidelines to make sure the questions
are relevant and helpful.
• Provides Feedback: After your answers, AI Samvadini can give you feedback on how
well you did. This helps you understand where you can improve.
• Resume Screening: AI Samvadini can read your resume and ask questions related to
your experience and skills.
With AI Samvadini, you can start a podcast anytime without waiting for a real interviewer,
making it a convenient tool that you can use from the comfort of your home. It provides
personalized questions tailored to your specific job profile.

How Does It Work?


1. Upload Your Resume: Begin by uploading your resume to the system.
2. Select Your Job Position: Choose the job position you are applying for from the
provided options.
3. Start the Interview: AI Samvadini will ask you questions related to your job position.
Answer these questions as you would in a real interview.
4. Get Feedback: Receive immediate feedback on your performance. You can also
download the feedback for your reference and further improvement.
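The four-step flow above can be sketched as a small driver loop. Everything here is illustrative: `run_interview`, `ask`, and `get_answer` are hypothetical names, and a real implementation would generate questions and feedback with a language model rather than a fixed list.

```python
def run_interview(resume_text, job_position, ask, get_answer):
    """Drive the flow: resume and job position in, questions out, feedback at the end."""
    # In the real app, resume_text would seed resume-based questions.
    transcript = []
    questions = [
        f"Tell me about your experience relevant to a {job_position} role.",
        "Which project from your resume are you most proud of?",
    ]
    for q in questions:
        ask(q)                              # step 3: the interviewer asks a question
        transcript.append((q, get_answer()))
    # Step 4: a real system would grade the answers with an LLM; here we
    # return a trivial summary report.
    return {"answered": len(transcript), "transcript": transcript}

# Toy drivers standing in for the UI and the user.
asked = []
answers = iter(["I built web services for 3 years.", "A realtime chat app."])
report = run_interview("...", "software engineer", asked.append, lambda: next(answers))
```

The callbacks keep the sketch independent of any UI framework; in the actual project the questions would be rendered by Streamlit and the answers collected as text or transcribed voice.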

2
Development Overview

Overview

In this chapter, we will have a theoretical discussion on how you can develop AI Samvadini.
This chapter is like a briefing before starting the development of an intelligent podcast
interviewer.

All the key features of this problem statement can be achieved as follows:

1. Contextual Understanding (Solution):


• Natural Language Processing (NLP) Models: Implement advanced NLP techniques
using transformer-based models (e.g., GPT-3.5 Turbo or GPT-4) to understand and
analyze the context of ongoing conversations.
• Real-Time Context Analysis: Continuously process the conversation to maintain
context and generate questions that are relevant and smoothly fit into the dialogue.
• Semantic Analysis: Use semantic analysis to grasp the deeper meaning of the
conversation, ensuring the AI comprehends nuances and subtleties.

Implementation Steps:
1. Train NLP models on extensive datasets of interviews and conversational dialogues.
2. Integrate real-time processing capabilities to analyse live conversations.
3. Continuously update and fine-tune the models based on feedback and performance
metrics.
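One way to picture the real-time context step is a rolling window of recent turns that is flattened into the prompt sent to the model. This is only a sketch: `ContextWindow` and `build_prompt` are illustrative names, not part of any library, and a production system would also manage token budgets rather than a fixed turn count.

```python
from collections import deque

class ContextWindow:
    """Keep only the most recent turns so prompts stay within the model's context limit."""

    def __init__(self, max_turns=6):
        self.turns = deque(maxlen=max_turns)  # oldest turns are dropped automatically

    def add(self, speaker, text):
        self.turns.append((speaker, text))

    def build_prompt(self, instruction):
        # Flatten the window into a prompt a question-generation model can consume.
        history = "\n".join(f"{s}: {t}" for s, t in self.turns)
        return f"{instruction}\n\nConversation so far:\n{history}\n\nNext question:"

ctx = ContextWindow(max_turns=2)
ctx.add("Host", "Welcome to the show.")
ctx.add("Guest", "Thanks, happy to be here.")
ctx.add("Guest", "I mostly work on ML infrastructure.")
prompt = ctx.build_prompt("Generate one relevant follow-up question.")
```

With `max_turns=2` the opening line has already been evicted, so the prompt reflects only the latest exchange, which is the behaviour the continuous-processing step above asks for.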

2. Guest Profiling (Solution):


• Data Aggregation: Collect and aggregate data from various sources such as social
media profiles, professional networks (e.g., LinkedIn), past interviews, and articles.
• Machine Learning Algorithms: Develop machine learning algorithms to process and
analyse the collected data, identifying key areas of interest and expertise of the guest.
• Profile Database: Create a comprehensive database that stores detailed profiles of
guests, which the AI can access to generate personalized questions.

Implementation Steps:
1. Implement web scraping and API integration to gather guest data.
2. Develop machine learning models to analyse and extract relevant information.
3. Ensure data privacy and compliance with regulations while storing and processing guest
profiles.
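The aggregation step might reduce to merging per-source records into a single profile object the question generator can query. A minimal sketch, with `GuestProfile`, `merge`, and the source dictionaries entirely hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class GuestProfile:
    """Merged view of a guest built from several (hypothetical) data sources."""
    name: str
    expertise: set = field(default_factory=set)
    interests: set = field(default_factory=set)

    def merge(self, source):
        # Each source is a dict, e.g. scraped from a professional network
        # profile or extracted from a past interview transcript.
        self.expertise |= set(source.get("expertise", []))
        self.interests |= set(source.get("interests", []))

profile = GuestProfile("Asha Rao")
profile.merge({"expertise": ["NLP", "speech"], "interests": ["podcasting"]})
profile.merge({"expertise": ["NLP"], "interests": ["education"]})
```

Using sets makes the merge idempotent, so re-scraping the same source never duplicates entries; the real profile database would add provenance and consent tracking for the privacy requirements noted above.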

3. Dynamic Interaction (Solution):


• Adaptive Question Generation: Design the AI to adapt questions in real-time based on
the flow of the conversation and guest responses.
• Reinforcement Learning: Utilize reinforcement learning to improve the AI's adaptability,
learning from each interaction to enhance future performance.
• Feedback Loop: Implement a feedback loop where the AI evaluates the effectiveness of
its questions and adjusts its approach accordingly.

Implementation Steps:
1. Integrate reinforcement learning algorithms to enable the AI to learn and adapt.
2. Continuously monitor the conversation to adjust the question generation process in real-
time.
3. Collect feedback and performance data to refine the AI's adaptive capabilities.
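The feedback loop can be illustrated with a deliberately crude stand-in for the learning component: score each candidate follow-up by how strongly it connects to the guest's last answer, then ask the best one. A real system would use a language model or reinforcement learning instead of this keyword-overlap heuristic; `score_relevance` and `pick_follow_up` are hypothetical names.

```python
def score_relevance(question, answer):
    """Crude relevance score: fraction of question words that echo the last answer."""
    q = set(question.lower().split())
    a = set(answer.lower().split())
    return len(q & a) / len(q) if q else 0.0

def pick_follow_up(candidates, last_answer):
    # The feedback loop in miniature: rank candidate questions by how well
    # they connect to what the guest just said, and ask the winner.
    return max(candidates, key=lambda q: score_relevance(q, last_answer))

answer = "I spent five years building recommendation systems at a startup"
candidates = [
    "What do you enjoy about cooking?",
    "What was hardest about building recommendation systems?",
]
best = pick_follow_up(candidates, answer)
```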

4. Content Diversity (Solution):
• Knowledge Base Integration: Develop a comprehensive knowledge base that covers a
wide range of topics, ensuring the AI can support diverse subject matter.
• Regular Updates: Regularly update the knowledge base with new information, trends,
and developments to keep the AI current.
• Topic Exploration Algorithms: Implement algorithms that allow the AI to explore
various topics, generating questions that maintain the quality and depth of the interview.

Implementation Steps:
1. Create and maintain a robust knowledge base with extensive information on various
subjects.
2. Design algorithms that can efficiently access and utilize the knowledge base during
interviews.
3. Set up a system for regular updates and additions to the knowledge base to ensure it
remains relevant.
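A topic-exploration algorithm can be as simple as steering toward whichever knowledge-base topic has been covered least so far. The dictionary-backed knowledge base and the `next_topic` helper below are illustrative only; the real knowledge base would be far larger and regularly updated as described above.

```python
# Toy knowledge base: topic -> prepared questions (illustrative content).
knowledge_base = {
    "machine learning": ["What drew you to ML?", "How do you evaluate models?"],
    "startups": ["What was your first hire?"],
    "podcasting": ["How do you prepare for episodes?"],
}

def next_topic(coverage_counts, kb):
    """Pick the topic asked about least so far, keeping the interview diverse."""
    return min(kb, key=lambda t: coverage_counts.get(t, 0))

coverage = {"machine learning": 3, "podcasting": 1}
topic = next_topic(coverage, knowledge_base)
question = knowledge_base[topic][0]
```

Because "startups" has never been covered, it wins the `min`, nudging the interview toward untouched ground without abandoning the prepared material.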

5. User Interface and Experience (Solution):


• Intuitive Dashboard: Develop a user-friendly dashboard for podcasters to interact with
the AI, input guest information, select topics, and review suggested questions.
• Customization Options: Provide options for podcasters to customize the AI’s behaviour,
such as setting the tone of questions or focusing on specific areas of interest.
• Real-Time Monitoring: Enable podcasters to monitor the AI's performance in real-time,
making adjustments as needed during the interview.

Implementation Steps:
1. Design a clean and intuitive user interface for the dashboard.
2. Implement customization features to allow podcasters to tailor the AI’s behaviour.
3. Develop real-time monitoring tools to track the AI’s performance and make necessary
adjustments on the fly.

3
Feasibility Study

Overview

In this chapter, we explore the feasibility of the AI Samvadini project, focusing on technical
requirements, operational needs, and economic considerations. This summary will guide you in
assessing the project's viability.

Technical Feasibility
1. System Architecture:
Implement advanced NLP algorithms to understand and analyse the conversation context.
Utilise machine learning models, particularly transformer-based models like GPT-3.5 Turbo or
GPT-4, to generate relevant and engaging questions. Develop a system to gather and process
guest information from various sources (social media, professional profiles, previous interviews)
to build a comprehensive profile. Design a module that can adjust questions based on real-time
conversation flow and guest responses. Integrate a knowledge base covering a wide range of
topics to ensure the AI can handle diverse subjects.

2. Data Requirements:
Gather extensive datasets of interviews, Q&A sessions, and conversational dialogues to train
the NLP and machine learning models. Access to public databases and APIs to collect
background information on guests.

3. Technical Challenges:
Ensuring the AI accurately understands and follows the context of the conversation. Maintaining
low latency to generate questions in real-time. Handling sensitive guest data responsibly and
ensuring data privacy and security.

4. Technology Stack:
• Programming Languages: Python, JavaScript
• Frameworks and Libraries: TensorFlow, PyTorch, LangChain, NLTK, spaCy
• Cloud Services: AWS.
• Databases: NoSQL databases like MongoDB for flexible data storage and retrieval

Operational Feasibility
1. Development Team:
• AI Researchers: Specialists in NLP and machine learning to develop and fine-tune
models.
• Software Engineers: To build and integrate system components.
• Data Scientists: To gather, preprocess, and manage training data.
• Project Managers: To oversee development timelines and ensure milestones are met.
If you have knowledge of all of the above, you can complete this project alone.

2. Development Timeline:
• Phase 1 (0-10 Days): Initial research, requirement analysis, and system design.
• Phase 2 (11-26 Days): Development of NLP models and guest profiling system.
• Phase 3 (27-35 Days): Integration of dynamic interaction module and content diversity
engine.
• Phase 4 (36-45 Days): Testing, refinement, and deployment.

3. Operational Challenges:
• Scalability: Ensuring the system can handle multiple concurrent interviews.
• User Training: Educating podcasters on how to use the AI effectively.

• Maintenance: Regular updates and maintenance to keep the system performing
optimally and to address any issues.

Economic Feasibility
This is required only if you want to develop the project at a large scale; otherwise, you can
skip this section.

1. Initial Investment:
• Development Costs: Salaries for AI researchers, software engineers, data scientists,
and project managers.
• Infrastructure Costs: Expenses for cloud services, development tools, and data
acquisition.
• Marketing and Launch Costs: Budget for marketing campaigns and initial launch
efforts.

2. Operational Costs:
• Cloud Services: Ongoing costs for cloud infrastructure to support AI processing and
data storage.
• Maintenance and Support: Salaries for support staff and costs for system maintenance.

3. Revenue Generation:
• Subscription Model: Offer the AI interviewer as a subscription service to podcasters.
• Licensing: License the technology to other platforms and services.

4. Risk Analysis:
• Market Adoption: Risk of slower than expected market adoption.
• Technological Risks: Potential challenges in achieving desired AI performance and
accuracy.
• Competition: Risk of competitors developing similar or superior solutions.

4
Requirements

Overview

In this chapter, we explore the essential requirements for the AI Samvadini project, including
core functionalities, performance standards, and necessary software and tools. This overview
provides a clear foundation for development and maintenance.

Functional Requirements
This project has the following functional requirements:

1. Resume Upload and Processing


• Users must be able to upload their resumes in PDF format.
• The system must extract text from the uploaded resume.
• The text must be split into manageable chunks for processing.
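In the real application, PyPDF2 (or a similar library) would supply the extracted text; the splitting step itself can be sketched in plain Python. `split_into_chunks` is a hypothetical helper in the spirit of LangChain's text splitters; the overlap parameter repeats a tail of each chunk at the start of the next so context survives chunk boundaries.

```python
def split_into_chunks(text, chunk_size=200, overlap=50):
    """Split extracted resume text into overlapping chunks ready for embedding."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step back by `overlap` chars to keep context
    return chunks

# Stand-in for text extracted from an uploaded PDF resume.
resume_text = "Experienced software engineer with a background in Python. " * 10
chunks = split_into_chunks(resume_text, chunk_size=200, overlap=50)
```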

2. Text Embeddings and Vector Storage


• The system must convert the extracted resume text into embeddings using OpenAI
embeddings.
• The embeddings must be stored in a vector database (FAISS) for efficient retrieval.
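FAISS is a compiled library for fast approximate nearest-neighbour search; the retrieval idea it provides can be illustrated without it, using a brute-force cosine-similarity search over tiny hand-made vectors. The 3-dimensional "embeddings" below are toys (real OpenAI embeddings have 1536 dimensions), and `nearest_chunk` is a hypothetical helper, not a FAISS API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest_chunk(query_vec, store):
    """Return the stored chunk whose vector is most similar to the query,
    which is conceptually what a FAISS index lookup does."""
    return max(store, key=lambda item: cosine(query_vec, item["vector"]))["chunk"]

store = [
    {"chunk": "Worked on backend APIs", "vector": [1.0, 0.0, 0.0]},
    {"chunk": "Led a machine learning team", "vector": [0.0, 1.0, 0.2]},
]
hit = nearest_chunk([0.1, 0.9, 0.1], store)
```

FAISS earns its place once the store holds thousands of high-dimensional vectors, where brute force becomes too slow; the interface idea (vector in, most similar chunk out) stays the same.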

3. Interview Simulation
a. The system must simulate an interview process based on either a predefined guideline or a
resume.
b. The interview simulation must be able to:
• Ask questions based on the resume or job description.
• Provide feedback on user responses.
• Handle both text and voice inputs from the user.

4. Speech Synthesis and Recognition


• The system must support speech synthesis to convert text responses into audio using
AWS Polly.
• The system must support speech recognition to transcribe user audio input into text
using OpenAI Whisper.

5. Interview Feedback and Guideline


• The system must generate and display interview guidelines based on job descriptions or
resumes.
• The system must provide feedback on the interview performance and allow users to
download the feedback as a text file.

6. User Interface
a. The application must have a user-friendly interface allowing:
• Resume upload.
• Input of text or voice responses.
• Interaction with the interview simulation (start/stop, view feedback, etc.).
b. The interface must provide visual indicators of progress and status (e.g., percentage
completed).

7. Error Handling
• The system must handle errors gracefully and provide meaningful error messages to
users.
• Errors in resume upload, audio recording, or text processing should be clearly
communicated to the user.

8. Data Management and Privacy
• The system must ensure that user data, including resumes and responses, is stored
securely.
• Personal data must be handled according to relevant data protection regulations and
privacy policies.

9. System Integration
• The system must integrate with external services such as AWS Polly for text-to-speech
and OpenAI for embeddings and transcription.
• It must use appropriate APIs and manage API keys securely.

10. Scalability and Performance


• The system must be scalable to handle multiple users and large volumes of text or audio
data.
• Performance should be optimized for quick response times in both text and voice
interactions.

11. Session Management


• The system must manage user sessions to maintain context and history during
interviews.
• Session state should be preserved across user interactions to ensure continuity.
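Streamlit exposes `st.session_state`, a dict-like object that survives reruns of the script, which is what makes this requirement practical. The pattern can be sketched with a plain dictionary standing in for it; `record_turn` is a hypothetical helper.

```python
session_state = {}  # stand-in for Streamlit's st.session_state

def record_turn(state, question, answer):
    """Append a Q&A pair so later prompts and feedback can use the interview history."""
    state.setdefault("history", []).append({"q": question, "a": answer})
    state["turns"] = len(state["history"])

record_turn(session_state, "Tell me about yourself.", "I'm a backend developer.")
record_turn(session_state, "Which languages do you use?", "Mostly Python and Go.")
```

Because the state outlives each interaction, the interviewer can reference earlier answers when generating follow-up questions, giving the continuity this requirement describes.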

12. Accessibility and Compatibility


• The application must be accessible on various devices and platforms (e.g., desktop,
mobile).
• It should support multiple browsers and handle compatibility issues effectively.

These requirements outline the core functionalities and considerations needed to build and
maintain the project, ensuring it meets user needs and operates effectively.

Non-Functional Requirements
The project has the following non-functional requirements.

1. Performance
• Response Time: The system should respond to user interactions (e.g., answering
questions, generating feedback) within 2-3 seconds to ensure a smooth user experience.
• Scalability: The system must handle an increasing number of users and requests
without significant degradation in performance, supporting both peak loads and normal
loads efficiently.

2. Reliability
• Availability: The system should have an uptime of 99.9%, ensuring that it is available
and operational for users most of the time.
• Fault Tolerance: The system should be designed to handle failures gracefully, with
mechanisms for error recovery and minimal disruption to users.

3. Security
• Data Encryption: Sensitive data, including resumes and personal information, must be
encrypted both at rest and in transit to protect against unauthorized access.
• Authentication and Authorization: Implement robust authentication and authorization
mechanisms to ensure that only authorized users can access and modify their data.

4. Usability
• User Interface: The user interface should be intuitive and easy to navigate, with clear
instructions and feedback to guide users through the interview process and other
functionalities.
• Accessibility: The system should be accessible to users with disabilities, complying with
accessibility standards such as WCAG (Web Content Accessibility Guidelines).

5. Maintainability
• Code Quality: The system's code should be well-documented, modular, and follow best
practices to facilitate ease of maintenance and future updates.
• Error Handling: Implement comprehensive error handling and logging to assist in
diagnosing and fixing issues efficiently.

6. Portability
• Cross-Platform Compatibility: The system should be compatible with major web
browsers (e.g., Chrome, Firefox, Safari) and support multiple operating systems (e.g.,
Windows, macOS, Linux).

7. Data Integrity
• Data Validation: Ensure that data entered by users is validated for correctness and
completeness to prevent errors and inconsistencies.
• Backup and Recovery: Implement regular backups and a robust recovery plan to
protect against data loss or corruption.
These non-functional requirements help ensure that the system not only meets functional needs
but also provides a reliable, secure, and user-friendly experience.

Software Requirements
The software requirements for this project are as follows:

1. Operating System
• Server: Linux (e.g., Ubuntu) or Windows Server, depending on deployment preferences.
• Client: Cross-platform support for major operating systems (e.g., Windows, macOS,
Linux).

2. Web Framework
• Streamlit: Used for building the interactive web application and user interface.

3. Programming Languages
• Python: Primary language for backend development, handling business logic, data
processing, and integration with APIs.

4. Data Processing and Machine Learning Libraries
• LangChain: For building conversational agents and handling natural language
processing tasks.
• OpenAI API: For interacting with OpenAI's language models (e.g., GPT-3.5) and
performing tasks like text generation and embeddings.
• NLTK: For natural language processing tasks, such as text splitting and tokenization.
• PyPDF2: For extracting text from PDF resumes.

5. Database and Storage


• FAISS: For similarity search and managing vector embeddings.
• S3 or equivalent cloud storage service: For storing user resumes and other files
securely.

6. Speech Processing
• AWS Polly: For text-to-speech synthesis.
• OpenAI Whisper: For speech-to-text transcription.

7. APIs and External Services


• AWS SDK (Boto3): For interacting with AWS services (e.g., Polly for text-to-speech).
• OpenAI API: For accessing OpenAI models for various language and conversation
tasks.

8. Development Tools
• Integrated Development Environment (IDE): Such as PyCharm, VSCode, or Jupyter
Notebook for development and debugging.
• Version Control System: Git for source code management and collaboration (e.g.,
GitHub or GitLab).
These software requirements will guide the development, deployment, and maintenance of the
system, ensuring that it meets the project's functional and non-functional needs effectively.

5
Code and Explanation

Overview

In this chapter, we explore the complete code for AI Samvadini, accompanied by a thorough
explanation of each section. We will walk through the code step by step, highlighting the key
components and their functions, to help you understand how AI Samvadini operates and how
each part contributes to its performance as a sophisticated podcast interviewer.

Homepage.py
First, we need to create a homepage from which we can navigate to the other pages and get an
overview of the application.
import streamlit as st
from streamlit_option_menu import option_menu
from app_utils import switch_page
from PIL import Image
• streamlit as st: Imports the Streamlit library, allowing you to build interactive web
applications.
• from streamlit_option_menu import option_menu: Imports a custom component from
the streamlit_option_menu package to create a styled menu.
• from app_utils import switch_page: Imports a function named switch_page from a
custom module app_utils, which is used to handle page navigation within the app.
• from PIL import Image: Imports the Python Imaging Library (PIL) to handle image
operations, such as loading and displaying images.
im = Image.open("icon3.png")
st.set_page_config(page_title="AI Samvadini", layout="centered", page_icon=im)
Image.open("icon3.png"): Opens an image file named icon3.png to be used as the favicon (small
icon) for the web page.
st.set_page_config(...): Configures the appearance and layout of the Streamlit page.
• page_title="AI Samvadini": Sets the browser tab title to "AI Samvadini".
• layout="centered": Centers the content of the page.
• page_icon=im: Sets the page icon to the image loaded from icon.png.
lan = st.selectbox("#### Language", ["English", "Coming Soon!"])
if lan == "English":
    home_title = "AI Samvadini"
    home_introduction = "Welcome to AI Samvadini, empowering your Podcast with generative AI."
with st.sidebar:
    st.markdown('AI Samvadini - S1.0.0')
    st.markdown("""
    #### Let's contact:
    [Kautilya Utkarsh](https://www.linkedin.com/in/kautilya-utkarsh-mishra-187818265/)
    [At C# Corner](https://www.c-sharpcorner.com/members/kautilya-utkarsh)
    #### Product of
    [C# Corner](https://www.c-sharpcorner.com/)
    #### Powered by
    [OpenAI](https://openai.com/)
    [LangChain](https://github.com/hwchase17/langchain)
    """)

st.selectbox(...): Creates a dropdown menu allowing users to select a language. Options
include "English" and a placeholder "Coming Soon!".
if lan == "English": Checks if the language is set to English.
home_title and home_introduction: Defines the title and introductory text for the application in
English.
with st.sidebar: Defines the content of the sidebar:
• st.markdown(...): Uses Markdown to display project information, contact details, and
links to technology providers.
st.markdown(
    "<style>#MainMenu{visibility:hidden;}</style>",
    unsafe_allow_html=True
)
st.image(im, width=100)
• st.markdown("<style>#MainMenu{visibility:hidden;}</style>",
unsafe_allow_html=True): Hides the default Streamlit menu using custom CSS.
• st.image(im, width=100): Displays the page icon image (icon3.png) with a width of 100
pixels.
st.markdown(f"""# {home_title} <span style=color:#2E9BF5><font size=5>C# Corner</font></span>""", unsafe_allow_html=True)
st.markdown("""\n""")
# st.markdown("#### Greetings")
st.markdown("Welcome to AI Samvadini! 👏 AI Samvadini is your personal podcast interviewer powered by generative AI that conducts podcasts."
            " You can upload your resume and enter a job profile, and AI Samvadini will ask you customized questions. Additionally, you can configure your own Podcast Interviewer!")
st.markdown("""\n""")
st.markdown("#### Get started!")
st.markdown("Select one of the following screens to start your interview!")

• st.markdown(f"""# {home_title} <span style=color:#2E9BF5><font size=5>C# Corner</font></span>""", unsafe_allow_html=True): Displays the main title with custom HTML styling for color and font size.
• st.markdown(...): Provides introductory text explaining the application's features.
• st.markdown("#### Get started!"): Adds a header prompting users to start using the application.
• st.markdown("Select one of the following screens to start your interview!"): Instructs users to choose from the available options.
selected = option_menu(
    menu_title=None,
    options=["Professional", "Resume", "Behavioral", "Customize!"],
    icons=["cast", "cloud-upload", "cast"],
    default_index=0,
    orientation="horizontal",
)

• option_menu(...): Creates a horizontal navigation menu:
• menu_title=None: No title for the menu.
• options=["Professional", "Resume", "Behavioral", "Customize!"]: Defines menu
items.
• icons=["cast", "cloud-upload", "cast"]: Assigns icons to each menu item.
• default_index=0: Sets the default selected item to the first one.
• orientation="horizontal": Displays menu items horizontally.
if selected == 'Professional':
st.info(""" 📚In this session, the AI Samvadini will assess
your technical skills as they relate to the provided description.
Note: The maximum length of your answer is 4097 tokens!
- Each Interview will take 10 to 15 mins.
- To start a new session, just refresh the page.
- Choose your favorite interaction style (chat/voice)
- Start introduce yourself and enjoy! """)
if st.button("Start Interview!"):
switch_page("Professional Screen")
if selected == 'Resume':
st.info("""
📚In this session, the AI Samvadini will review your resume
and discuss your past experiences.
Note: The maximum length of your answer is 4097 tokens!
- Each Interview will take 10 to 15 mins.
- To start a new session, just refresh the page.
- Choose your favorite interaction style (chat/voice)
- Start introduce yourself and enjoy! """
)
if st.button("Start Interview!"):
switch_page("Resume Screen")
if selected == 'Behavioral':
st.info(""" 📚In this session, the AI Samvadini will assess
your soft skills as they relate to the job description.
Note: The maximum length of your answer is 4097 tokens!
- Each Interview will take 10 to 15 mins.
- To start a new session, just refresh the page.
- Choose your favorite interaction style (chat/voice)
- Start introduce yourself and enjoy!
""")
if st.button("Start Interview!"):
switch_page("Behavioral Screen")
if selected == 'Customize!':
st.info("""
📚In this session, you can customize your own AI Samvadini
and practice with it!
- Configure AI Samvadini in different specialties.
- Configure AI Samvadini in different personalities.
- Different tones of voice.

Coming at the end of July""")

• if selected == 'Professional': Checks if "Professional" is selected.


• st.info(...): Displays information about the professional interview session.
• if st.button("Start Interview!"): Displays a button to start the interview.
• switch_page("Professional Screen"): Navigates to the "Professional Screen" when the
button is clicked.
The same structure is used for other options ("Resume," "Behavioral," "Customize!"), providing
relevant information and buttons to start the corresponding interviews.
with st.expander("Updates"):
st.write("""
07/10/2024
- Fix the error that was occurring on the Behavioral page """)
with st.expander("What's coming next?"):
st.write("""
Improved voice interaction for a seamless experience. """)
• with st.expander("Updates"): Creates a collapsible section labeled "Updates" that
displays recent updates.
• with st.expander("What's coming next?"): Creates another collapsible section labeled
"What's coming next?" showing future improvements.
After completing this you will get an interface like the one shown in the figure, which you can
modify to suit your needs. The homepage provides options to navigate to the other three
screens, along with a short description of what each screen does.

Page/Behavioural.py
On the homepage we created options for three more screens. Here we will look at the Behavioral
screen.
In Behavioral Screen, the AI Samvadini will assess your soft skills as they relate to the job
description.
Note: The maximum length of your answer is 4097 tokens!
• Each Interview will take 10 to 15 mins.
• To start a new session, just refresh the page.
• Choose your favorite interaction style (chat/voice)
• Start by introducing yourself and enjoy!
import streamlit as st
from streamlit_lottie import st_lottie
from typing import Literal
from dataclasses import dataclass
import json
import base64
from langchain.memory import ConversationBufferMemory
from langchain_community.callbacks.manager import get_openai_callback
from langchain_community.chat_models import ChatOpenAI
from langchain.chains import ConversationChain, RetrievalQA
from langchain.prompts.prompt import PromptTemplate
from langchain.text_splitter import NLTKTextSplitter
#from langchain_openai import OpenAIEmbeddings
from langchain.embeddings import OpenAIEmbeddings
#from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
import nltk
from prompts.prompts import templates
# Audio
from speech_recognition.openai_whisper import save_wav_file, transcribe
from audio_recorder_streamlit import audio_recorder
from aws.synthesize_speech import synthesize_speech
from IPython.display import Audio

• import streamlit as st: Imports Streamlit for building interactive web applications.
• from streamlit_lottie import st_lottie: Imports st_lottie for integrating Lottie animations
into the app.
• from typing import Literal: Imports Literal for type hints in data classes.
• from dataclasses import dataclass: Imports dataclass for creating simple classes to hold
data.
• import json: Imports JSON handling functions.
• import base64: Imports base64 encoding functions for audio handling.
• from langchain.memory import ConversationBufferMemory: Imports
ConversationBufferMemory for managing conversation history.
• from langchain_community.callbacks.manager import get_openai_callback:
Imports get_openai_callback for managing OpenAI API callbacks.

• from langchain_community.chat_models import ChatOpenAI: Imports ChatOpenAI
for creating a chat model.
• from langchain.chains import ConversationChain, RetrievalQA: Imports
ConversationChain and RetrievalQA for managing conversation flow and retrieval-based
QA.
• from langchain.prompts.prompt import PromptTemplate: Imports PromptTemplate
for defining prompt templates.
• from langchain.text_splitter import NLTKTextSplitter: Imports NLTKTextSplitter for
splitting text into chunks.
• from langchain.embeddings import OpenAIEmbeddings: Imports
OpenAIEmbeddings for creating text embeddings.
• from langchain_community.vectorstores import FAISS: Imports FAISS for vector
similarity search.
• import nltk: Imports NLTK library for natural language processing.
• from prompts.prompts import templates: Imports prompt templates from a custom
module.
• from speech_recognition.openai_whisper import save_wav_file, transcribe:
Imports functions for saving recorded audio and transcribing it.
• from audio_recorder_streamlit import audio_recorder: Imports audio_recorder for
recording audio within Streamlit.
• from aws.synthesize_speech import synthesize_speech: Imports function for text-to-
speech synthesis using AWS.
• from IPython.display import Audio: Imports Audio for playing audio files.
def load_lottiefile(filepath: str):
'''Load lottie animation file'''
with open(filepath, "r") as f:
return json.load(f)
st_lottie(load_lottiefile("images/welcome.json"), speed=1,
reverse=False, loop=True, quality="high", height=300)

• load_lottiefile(filepath: str): Function to load a Lottie animation from a file.


• st_lottie(...): Renders the Lottie animation on the Streamlit page with specified
properties like speed, loop, and quality
with st.expander("""Why did I encounter errors when I tried to talk to
the AI Samvadini?"""):
st.write("""
This is because the app failed to record. Make sure that your
microphone is connected and that you have given permission to the
browser to access your microphone.""")
st.expander(...): Creates a collapsible section explaining common errors related to microphone
access.
jd = st.text_area("""Please enter your job profile here (If you don't
have one, enter keywords, such as "communication" or "teamwork"
instead): """)
auto_play = st.checkbox("Let AI Samvadini speak! (Please don't switch
during the interview)")
st.text_area(...): Provides a text area for users to enter their job profile or keywords.

st.checkbox(...): Adds a checkbox to enable or disable auto-play of AI responses.
@dataclass
class Message:
'''dataclass for keeping track of the messages'''
origin: Literal["human", "ai"]
message: str
Message: A data class to represent a message in the conversation, including its origin (human
or AI) and the message content.
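Taken on its own, the dataclass can be exercised without Streamlit. A minimal sketch of how the history list is built and filtered (the sample messages are invented for illustration):

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Message:
    '''dataclass for keeping track of the messages'''
    origin: Literal["human", "ai"]
    message: str

# Build a small history, as st.session_state.history does in the app
history = [
    Message("ai", "Hello! Please introduce yourself."),
    Message("human", "Hi, I'm a backend developer."),
    Message("ai", "What does teamwork mean to you?"),
]

# Rendering code can branch on origin, exactly as the chat loop does
ai_turns = [m.message for m in history if m.origin == "ai"]
print(len(ai_turns))  # 2
```

Because Message is a dataclass, equality and repr come for free, which makes the history easy to inspect and test.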
def autoplay_audio(file_path: str):
'''Play audio automatically'''
def update_audio():
global global_audio_md
with open(file_path, "rb") as f:
data = f.read()
b64 = base64.b64encode(data).decode()
global_audio_md = f"""
<audio controls autoplay="true">
<source src="data:audio/mp3;base64,{b64}"
type="audio/mp3">
</audio>
"""
def update_markdown(audio_md):
st.markdown(audio_md, unsafe_allow_html=True)
update_audio()
update_markdown(global_audio_md)

• autoplay_audio(file_path: str): The function takes a single parameter, file_path, which


is the path to the audio file you want to play.
• update_audio(): This inner function handles the process of reading the audio file,
encoding it in base64, and generating the HTML code for an audio player.
• global global_audio_md: Declares a global variable global_audio_md to store the
HTML code for the audio player.
• with open(file_path, "rb") as f:: Opens the audio file in binary read mode.
• data = f.read(): Reads the entire audio file into a variable data.
• b64 = base64.b64encode(data).decode(): Encodes the binary data into a base64 string
and decodes it to a UTF-8 string.
• global_audio_md = f"""<audio controls autoplay="true"><source
src="data:audio/mp3;base64,{b64}" type="audio/mp3"></audio>""": Constructs the
HTML code for an audio element with autoplay enabled, using the base64-encoded
audio data.
• update_markdown(audio_md): This inner function takes the HTML code for the audio
player and renders it in the Streamlit app using st.markdown.
• st.markdown(audio_md, unsafe_allow_html=True): Uses Streamlit's st.markdown to
render the HTML. The unsafe_allow_html=True parameter is necessary to allow raw
HTML to be rendered.
• update_audio(): Calls the update_audio function to read the audio file, encode it, and
generate the HTML code for the audio player.

• update_markdown(global_audio_md): Calls the update_markdown function to render
the generated HTML code in the Streamlit app, which will display the audio player and
start playing the audio automatically.
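The heart of autoplay_audio is the conversion from raw MP3 bytes to a base64 data URI inside an HTML audio element. That step can be sketched without Streamlit (the byte string below is a dummy placeholder for a real MP3 file):

```python
import base64

def audio_tag_from_bytes(data: bytes) -> str:
    """Build an autoplaying HTML <audio> element from raw MP3 bytes."""
    b64 = base64.b64encode(data).decode()  # bytes -> base64 text
    return (f'<audio controls autoplay="true">'
            f'<source src="data:audio/mp3;base64,{b64}" type="audio/mp3">'
            f'</audio>')

# A dummy payload stands in for the contents of a real audio file
tag = audio_tag_from_bytes(b"ID3 fake-mp3-bytes")
print(tag[:14])  # <audio control
```

Embedding the audio as a data URI avoids serving a separate file; the browser decodes the base64 payload and plays it directly.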
def embeddings(text: str):
'''Create embeddings for the job description'''
nltk.download('punkt')
text_splitter = NLTKTextSplitter()
texts = text_splitter.split_text(text)
# Create embeddings
embeddings = OpenAIEmbeddings()
docsearch = FAISS.from_texts(texts, embeddings)
retriever = docsearch.as_retriever(search_type="similarity")
return retriever

• embeddings(text: str): The function takes a single parameter, text, which is the text of
the job description for which embeddings need to be created.
• nltk.download('punkt'): This line downloads the 'punkt' package from the Natural
Language Toolkit (NLTK). The 'punkt' package is used for tokenizing text into sentences
and words.
• text_splitter = NLTKTextSplitter(): Initializes an instance of the NLTKTextSplitter class.
This class is used to split the text into smaller chunks.
• texts = text_splitter.split_text(text): Uses the text splitter to divide the input text into
smaller chunks. This is useful for creating embeddings because large texts can be
difficult to handle in one go.
• embeddings = OpenAIEmbeddings(): Initializes an instance of the OpenAIEmbeddings
class. This class is used to create embeddings for the text chunks. Embeddings are
numerical representations of text that capture semantic meaning.
• docsearch = FAISS.from_texts(texts, embeddings): Uses the FAISS library to create
an index from the text chunks and their embeddings. FAISS (Facebook AI Similarity
Search) is a library for efficient similarity search and clustering of dense vectors.
• retriever = docsearch.as_retriever(search_type="similarity"): Converts the
FAISS index into a retriever object that can be used to perform similarity searches. The
search_type parameter specifies that the retriever should rank chunks by vector similarity.
• return retriever: The function returns the retriever object, which can be used to find text
chunks similar to a given query based on their embeddings.

Check and Initialize retriever:


if "retriever" not in st.session_state:
st.session_state.retriever = embeddings(jd)

• Purpose: If the retriever is not already present in the session state, it initializes it using
the embeddings function with the job description (jd).
• Function: embeddings(jd) process the job description to create a retriever object for
similarity searches.

Check and Initialize chain_type_kwargs:
if "chain_type_kwargs" not in st.session_state:
Behavioral_Prompt = PromptTemplate(input_variables=["context",
"question"], template=templates.behavioral_template)
st.session_state.chain_type_kwargs = {"prompt":
Behavioral_Prompt}

• Purpose: If chain_type_kwargs is not present in the session state, it initializes it with a


PromptTemplate for behavioral questions.
• Behavioral Prompt: This template defines how questions should be structured based
on the context and the question itself.

Check and Initialize history:


if "history" not in st.session_state:
st.session_state.history = []
st.session_state.history.append(Message("ai", "Hello there! I
am your interviewer today. I will assess your soft skills through a
series of questions. Let's get started! Please start by saying hello or
introducing yourself. Note: The maximum length of your answer is 4097
tokens!"))

• Purpose: If history is not present, it initializes it as an empty list and adds a welcome
message from the AI interviewer.
• Interview History: This keeps track of the conversation between the user and the AI.

Check and Initialize token_count:


if "token_count" not in st.session_state:
st.session_state.token_count = 0
Purpose: Initializes token_count to 0 if it is not already present. This variable tracks the number
of tokens used in the conversation.

Check and Initialize memory:


if "memory" not in st.session_state:
st.session_state.memory = ConversationBufferMemory()
Purpose: Initializes memory with a ConversationBufferMemory object if it is not already present.
This memory buffer helps in maintaining the context of the conversation.

Check and Initialize guideline:


if "guideline" not in st.session_state:
llm = ChatOpenAI(
model_name="gpt-3.5-turbo",
temperature=0.8, )
st.session_state.guideline = RetrievalQA.from_chain_type(
llm=llm,
chain_type_kwargs=st.session_state.chain_type_kwargs,
chain_type='stuff',

retriever=st.session_state.retriever,
memory=st.session_state.memory).run(
"Create an interview guideline and prepare a total of 8
questions. Make sure the questions test the soft skills")
If a guideline is not present, it initializes it by creating a RetrievalQA object and running it to
generate interview guidelines.
• A ChatOpenAI object is created with a specified model and temperature.
• A RetrievalQA object is created using the language model, prompt template, retriever,
and memory.
• The guideline is generated by running the RetrievalQA object with the instruction to
create an interview guideline and prepare eight questions focusing on soft skills.
if "conversation" not in st.session_state:
llm = ChatOpenAI(
model_name = "gpt-3.5-turbo",
temperature = 0.8,)
Purpose: If the conversation is not present in the session state, this block initializes it.
LLM Initialization: A ChatOpenAI object is created with the gpt-3.5-turbo model and a
temperature of 0.8. The temperature setting controls the randomness of the model's responses;
higher temperatures result in more varied and creative outputs.
PROMPT = PromptTemplate(
input_variables=["history", "input"],
template="""I want you to act as an interviewer strictly
following the guideline in the current conversation.
Candidate has no idea what the guideline
is.
Ask me questions and wait for my answers.
Do not write explanations.
Ask question like a real person, only one
question at a time.
Do not ask the same question.
Do not repeat the question.
Do ask follow-up questions if necessary.
Your name is GPTInterviewer.
I want you to only reply as an interviewer.
Do not write all the conversation at once.
If there is an error, point it out.
Current Conversation:
{history}
Candidate: {input}
AI: """)
• Purpose: This block creates a PromptTemplate for the interviewer. The template
specifies how the interviewer should behave and structure the conversation.

Template Details: The interviewer:


• Follows the guideline strictly.
• Asks questions like a real person, one at a time, without repeating or explaining.

• Only replies as an interviewer without writing the entire conversation at once.
• Points out any errors if they occur.
• Uses the history and input variables to maintain the context of the conversation.
st.session_state.conversation =
ConversationChain(prompt=PROMPT,llm=llm,
memory=st.session_state.memory)
Purpose: This line initializes the conversation in the session state.
Conversation Chain: A ConversationChain object is created using the PROMPT, llm, and
memory. This object manages the conversational flow between the user and the AI interviewer.
if "feedback" not in st.session_state:
llm = ChatOpenAI(
model_name = "gpt-3.5-turbo",
temperature = 0.5,)
Purpose: If the feedback is not present in the session state, this block initializes it.
LLM Initialization: Another ChatOpenAI object is created, but this time with a temperature of
0.5. This lower temperature setting makes the feedback responses more consistent and less
random.
st.session_state.feedback = ConversationChain(
prompt=PromptTemplate(input_variables = ["history",
"input"], template = templates.feedback_template),
llm=llm,
memory = st.session_state.memory,
)
Purpose: This block initializes the feedback in the session state.
Feedback Prompt Template: A PromptTemplate is created using the
templates.feedback_template, which defines how the feedback should be structured based on
the conversation history and user input.
Feedback Chain: A ConversationChain object is created using the feedback prompt template,
llm, and memory. This object handles generating feedback for the interview.

The answer_call_back Function


The answer_call_back function is designed to handle the user's input, process it, and generate a
response using the AI interviewer. It manages both text and voice inputs, integrates with
OpenAI's API for responses, and handles audio playback for synthesized speech. Here's a
detailed breakdown of each part of the function:

Context Manager for OpenAI Callback:


with get_openai_callback() as cb:
Purpose: Initializes the OpenAI callback context to monitor and manage the interaction with
OpenAI's API. This context manager tracks the token usage for billing and performance
purposes.
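get_openai_callback is a standard context manager: work done inside the with block is metered, and totals such as cb.total_tokens are read afterwards. A stdlib-only sketch of the same pattern, with a hypothetical tally object standing in for the real token-counting callback:

```python
from contextlib import contextmanager
from dataclasses import dataclass

@dataclass
class UsageTally:
    """Hypothetical stand-in for the object get_openai_callback yields."""
    total_tokens: int = 0

    def add(self, n: int) -> None:
        self.total_tokens += n

@contextmanager
def get_usage_callback():
    """Mimics the shape of get_openai_callback: yield an accumulator."""
    cb = UsageTally()
    yield cb  # the real callback hooks into LangChain calls instead

with get_usage_callback() as cb:
    cb.add(120)   # e.g. tokens consumed by the question
    cb.add(310)   # e.g. tokens consumed by the answer

print(cb.total_tokens)  # 430
```

The app uses the accumulated total in the same way, adding cb.total_tokens to st.session_state.token_count after each exchange.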

Handling User Input


human_answer = st.session_state.answer

Purpose: Retrieves the user's input from the session state, where it has been stored.
Transcribing Audio (if voice input is used)
if voice:
save_wav_file("temp/audio.wav", human_answer)
try:
input = transcribe("temp/audio.wav")
# save human_answer to history
except:
st.session_state.history.append(Message("ai", "Sorry, I
didn't get that."))
return "Please try again."
Purpose: If voice input is used, the function saves the audio to a WAV file and attempts to
transcribe it using the transcribe function.
Error Handling: If transcription fails, the function adds an error message to the session history
and returns a prompt to retry.

Text Input Handling:


else:
input = human_answer

st.session_state.history.append(
Message("human", input)
)
Purpose: Adds the user's input to the conversation history in the session state as a Message
object.

Run Conversation Chain:


llm_answer = st.session_state.conversation.run(input)
Purpose: Generates a response from the AI interviewer using the ConversationChain initialized
in the session state.
Speech Synthesis and Playback
audio_file_path = synthesize_speech(llm_answer)
Purpose: Converts the AI's text response into speech and saves it as an audio file.

Create Audio Widget with Autoplay:


audio_widget = Audio(audio_file_path, autoplay=True)
Purpose: Creates an audio widget to play the synthesized speech automatically.

Append AI Message to History:


st.session_state.history.append(
Message("ai", llm_answer)
)
Purpose: Adds the AI's response to the conversation history in the session state as a Message
object.

Token Count Update
st.session_state.token_count += cb.total_tokens
Purpose: Updates the total token count used in the session by adding the tokens consumed
during this interaction.

Return the Audio Widget:


return audio_widget
Purpose: Returns the audio widget to be displayed in the Streamlit app, enabling automatic
playback of the AI's synthesized speech.
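Putting the pieces together, the control flow of answer_call_back can be sketched free of Streamlit, with the transcription, conversation chain, and speech synthesis calls injected as plain callables (all names below are placeholders, not the app's real APIs):

```python
def answer_call_back(answer, history, voice, transcribe, converse, speak):
    """Sketch of the handler: transcribe if needed, run the chain, synthesize."""
    if voice:
        try:
            user_input = transcribe(answer)       # audio bytes -> text
        except Exception:
            history.append(("ai", "Sorry, I didn't get that."))
            return None                           # ask the user to retry
    else:
        user_input = answer                       # already text
    history.append(("human", user_input))
    reply = converse(user_input)                  # ConversationChain.run(...)
    history.append(("ai", reply))
    return speak(reply)                           # path to synthesized audio

# Stub dependencies to trace the flow with a text answer
history = []
audio = answer_call_back(
    "I enjoy pair programming.", history, voice=False,
    transcribe=lambda a: a,
    converse=lambda text: "Interesting. Why do you enjoy it?",
    speak=lambda text: "temp/reply.mp3",
)
print(audio, len(history))  # temp/reply.mp3 2
```

The real function additionally meters token usage via the OpenAI callback and wraps the audio path in an autoplaying widget, but the branching logic is the same.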
if jd:
Purpose: Checks if the user has provided a job description (JD). If jd is not empty, the app will
proceed with initializing the session state and setting up the interview.

Initializing Session State


initialize_session_state()
Purpose: Calls the initialize_session_state function to set up the necessary variables and
objects in the session state, such as the retriever, interview history, conversation chain, and
feedback mechanisms.

Layout and Placeholder Setup


credit_card_placeholder = st.empty()
col1, col2 = st.columns(2)
Purpose: Creates a placeholder for displaying progress and sets up two columns for placing
buttons.

Feedback and Guideline Buttons


with col1:
feedback = st.button("Get Interview Feedback")
with col2:
guideline = st.button("Show me interview guideline!")
Purpose: Adds buttons for getting interview feedback and displaying the interview guideline.
The feedback and guideline variables store the boolean state of the buttons (i.e., whether they
have been clicked).

Setting Up Placeholders
audio = None
chat_placeholder = st.container()
answer_placeholder = st.container()
Purpose: Initializes placeholders for audio playback, chat history, and user answers.

Displaying Interview Guideline


if guideline:
st.write(st.session_state.guideline)

Purpose: If the "Show me interview guideline!" button is clicked, it displays the interview
guideline generated in the session state.

Generating and Displaying Interview Feedback


if feedback:
evaluation = st.session_state.feedback.run("please give
evaluation regarding the interview")
st.markdown(evaluation)
st.download_button(label="Download Interview Feedback",
data=evaluation, file_name="interview_feedback.txt")
st.stop()
Purpose: If the "Get Interview Feedback" button is clicked, it generates feedback based on the
interview, displays it, and provides a button to download the feedback as a text file. It then stops
further execution to avoid overlapping interactions.
Handling User Input
else:
with answer_placeholder:
voice: bool = st.checkbox("I would like to speak with AI
Samvadini!")
if voice:
answer = audio_recorder(pause_threshold=2.5,
sample_rate=44100)
#st.warning("An UnboundLocalError will occur if the
microphone fails to record.")
else:
answer = st.chat_input("Your answer")
if answer:
st.session_state['answer'] = answer
audio = answer_call_back()
Purpose: Provides a checkbox for the user to choose between voice and text input. If voice input
is selected, it uses audio_recorder to capture the user's speech; otherwise it uses st.chat_input
to capture text input. Upon receiving an answer, it stores the input in the session state and
processes it through the answer_call_back function.

Displaying Chat History


with chat_placeholder:
for answer in st.session_state.history:
if answer.origin == 'ai':
if auto_play and audio:
with st.chat_message("assistant"):
st.write(answer.message)
st.write(audio)
else:
with st.chat_message("assistant"):
st.write(answer.message)
else:
with st.chat_message("user"):

st.write(answer.message)
Purpose: Iterates through the conversation history stored in the session state and displays each
message. If the message is from the AI and auto_play is enabled, it also plays the synthesized
audio response.

Displaying Progress
credit_card_placeholder.caption(f"""
Progress: {int(len(st.session_state.history) /
30 * 100)}% completed.
""")
Purpose: Displays the progress of the interview as a percentage, based on the number of
messages in the conversation history.
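The caption treats 30 messages as a complete interview, so the percentage is simply int(len(history) / 30 * 100); for example:

```python
def progress_pct(history_len: int, full_interview: int = 30) -> int:
    """Percentage shown in the caption: int(len(history) / 30 * 100)."""
    return int(history_len / full_interview * 100)

print(progress_pct(6))   # 20 -> after 3 question/answer exchanges
print(progress_pct(15))  # 50
```

Because each exchange adds two messages (one human, one AI), the bar advances in roughly 7% steps per question answered.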

Prompt for Job Description


else:
st.info("Please submit job description to start interview.")
Purpose: If no job description is provided, it prompts the user to submit one to start the
interview.
The outcome of the preceding code is shown in the figure.

Page/Professional.py
In this Professional Screen, the AI Samvadini will assess your technical skills as they relate to
the provided job description. Note: The maximum length of your answer is 4097 tokens!
• Each Interview will take 10 to 15 mins.
• To start a new session, just refresh the page.
• Choose your favorite interaction style (chat/voice)
• Start by introducing yourself and enjoy!
import streamlit as st
from streamlit_lottie import st_lottie

from typing import Literal
from dataclasses import dataclass
import json
import base64
from langchain.memory import ConversationBufferMemory
from langchain_community.callbacks import get_openai_callback
from langchain_community.chat_models import ChatOpenAI
from langchain.chains import ConversationChain, RetrievalQA
from langchain.prompts.prompt import PromptTemplate
from langchain.text_splitter import NLTKTextSplitter
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
import nltk
from prompts.prompts import templates
from speech_recognition.openai_whisper import save_wav_file, transcribe
from audio_recorder_streamlit import audio_recorder
from aws.synthesize_speech import synthesize_speech
from IPython.display import Audio
We are using the same libraries as in the Behavioral Screen, so to understand their purpose you
can refer to the Behavioral Screen code explanation.
def load_lottiefile(filepath: str):
with open(filepath, "r") as f:
return json.load(f)
st_lottie(load_lottiefile("images/welcome.json"), speed=1,
reverse=False, loop=True, quality="high", height=300)

#st.markdown("""solutions to potential errors:""")


with st.expander("""Why did I encounter errors when I tried to talk to
the AI Samvadini?"""):
st.write("""
This is because the app failed to record. Make sure that your
microphone is connected and that you have given permission to the
browser to access your microphone.""")

jd = st.text_area("Please enter the job description here (If you don't
have one, enter keywords, such as PostgreSQL or Python instead): ")
auto_play = st.checkbox("Let AI Samvadini speak! (Please don't switch
during the interview)")
load_lottiefile(filepath): Loads and returns the content of a Lottie animation file.
st_lottie(...): Displays the Lottie animation on the Streamlit app with specified properties.
Error Expander: Provides troubleshooting information in an expandable section.
Job Description Text Area: Allows users to enter a job description or keywords.
Auto-Play Checkbox: Lets users choose whether the AI responses should be automatically
played.
This setup ensures a welcoming and informative interface for users, guiding them through
potential issues and capturing necessary input for the interview process.

@dataclass
class Message:
"""class for keeping track of interview history."""
origin: Literal["human", "ai"]
message: str
The Message class is a structured way to track messages during the interview process. Each
instance of Message contains:
origin: Specifies whether the message came from the human user or the AI.
message: Contains the text of the message itself.
By using this class, you can easily manage and display the conversation history, differentiate
between user and AI messages, and maintain a clear record of the interaction.
def save_vector(text):
"""embeddings"""
nltk.download('punkt')
text_splitter = NLTKTextSplitter()
texts = text_splitter.split_text(text)
# Create embeddings
embeddings = OpenAIEmbeddings()
docsearch = FAISS.from_texts(texts, embeddings)
return docsearch
The save_vector function processes a given text to create vector embeddings and stores them
in a FAISS index. It performs the following steps:
• Downloads necessary NLTK models for text processing.
• Splits the text into smaller chunks.
• Creates vector embeddings for these text chunks using OpenAI's embedding model.
• Stores these embeddings in a FAISS index for efficient similarity search.
This function is useful for applications that involve searching or comparing large amounts of text
data based on their semantic content.
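NLTKTextSplitter is, at its core, a sentence-aware chunker. A rough stdlib approximation of the splitting step (the real splitter uses NLTK's punkt model and can merge sentences up to a chunk size):

```python
import re

def split_sentences(text: str) -> list[str]:
    """Naive stand-in for NLTKTextSplitter: split on sentence punctuation."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

jd = ("We need a Python developer. PostgreSQL experience is required! "
      "Remote work is possible.")
chunks = split_sentences(jd)
print(len(chunks))  # 3
```

Each resulting chunk is what gets embedded and indexed; smaller, sentence-sized chunks keep the similarity search focused on one idea at a time.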
def initialize_session_state_jd():
""" initialize session states """
if 'jd_docsearch' not in st.session_state:
st.session_state.jd_docsearch = save_vector(jd)
if 'jd_retriever' not in st.session_state:
st.session_state.jd_retriever =
st.session_state.jd_docsearch.as_retriever(search_type="similarity")
if 'jd_chain_type_kwargs' not in st.session_state:
Interview_Prompt = PromptTemplate(input_variables=["context",
"question"], template=templates.jd_template)
st.session_state.jd_chain_type_kwargs = {"prompt":
Interview_Prompt}
if 'jd_memory' not in st.session_state:
st.session_state.jd_memory = ConversationBufferMemory()
# interview history
if "jd_history" not in st.session_state:

st.session_state.jd_history = []
st.session_state.jd_history.append(Message("ai",
"Hello, Welcome to
the interview. I am your interviewer today. I will ask you professional
questions regarding the job description you submitted."
"Please start by
introducing yourself a little bit. Note: The maximum length of
your answer is 4097 tokens!"))
The initialize_session_state_jd function is designed to set up and manage various session state
variables related to job description (JD) processing and interview handling in a Streamlit
application. Here’s a detailed explanation of each part:

Function Definition
def initialize_session_state_jd():
""" initialize session states """
Purpose:
This function ensures that all necessary session state variables are initialized for managing job
description-based interviews.
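All of these checks follow the same idempotent "check, then set" pattern: on every Streamlit rerun, only missing keys are created, so accumulated state survives. The same pattern with a plain dict standing in for st.session_state:

```python
session_state = {}  # stands in for st.session_state

def initialize(state: dict) -> None:
    """Only create keys that are missing, so reruns don't reset state."""
    if "jd_history" not in state:
        state["jd_history"] = ["ai: Hello, welcome to the interview."]
    if "token_count" not in state:
        state["token_count"] = 0

initialize(session_state)
session_state["token_count"] += 250   # usage accrues during the session
initialize(session_state)             # a rerun re-calls this...
print(session_state["token_count"])   # ...but 250 is preserved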

Detailed Breakdown
Initialize JD Document Search Object
if 'jd_docsearch' not in st.session_state:
st.session_state.jd_docserch = save_vector(jd)
Purpose: Checks if jd_docsearch is already in st.session_state.
If not, it calls the save_vector function with the job description (jd). This function processes the
job description to create an FAISS index, which is then stored in the session state as
jd_docserch.

Initialize JD Retriever
if 'jd_retriever' not in st.session_state:
st.session_state.jd_retriever =
st.session_state.jd_docserch.as_retriever(search_type="similarity")
Purpose: Checks if jd_retriever is already in st.session_state.
If not, it creates a retriever from the FAISS index (jd_docsearch) using similarity search and
stores it in jd_retriever. This retriever will be used to fetch similar documents or text chunks
based on the job description.

Initialize JD Chain Type Keywords


if 'jd_chain_type_kwargs' not in st.session_state:
Interview_Prompt = PromptTemplate(input_variables=["context",
"question"],
template=templates.jd_templat
e)
st.session_state.jd_chain_type_kwargs = {"prompt":
Interview_Prompt}

https://www.c-sharpcorner.com/ebooks/ 37
Purpose: Checks if jd_chain_type_kwargs is already in st.session_state.
If not, it initializes it with a PromptTemplate object using jd_template from templates. This
prompt template will be used to format prompts for generating questions related to the job
description.

Initialize JD Memory
if 'jd_memory' not in st.session_state:
st.session_state.jd_memory = ConversationBufferMemory()
Purpose: Checks if jd_memory is already in st.session_state.
If not, it initializes jd_memory as an instance of ConversationBufferMemory, which will be used
to keep track of the conversation history during the interview.

Initialize JD Interview History


if "jd_history" not in st.session_state:
st.session_state.jd_history = []
st.session_state.jd_history.append(Message("ai",
"Hello, Welcome to
the interview. I am your interviewer today. I will ask you professional
questions regarding the job description you submitted."
"Please start by
introducting a little bit about yourself. Note: The maximum length of
your answer is 4097 tokens!"))
Purpose: Checks if jd_history is already in st.session_state.
If not, it initializes jd_history as an empty list and adds a welcome message from the AI to start
the interview process. This message provides instructions for the interview.

Initialize Token Count


if "token_count" not in st.session_state:
st.session_state.token_count = 0
Purpose: Checks if token_count is already in st.session_state.
If not, it initializes token_count to 0. This variable keeps track of the number of tokens used in
the conversation.

Initialize JD Guideline
if "jd_guideline" not in st.session_state:
    llm = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        temperature=0.8)
    st.session_state.jd_guideline = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type_kwargs=st.session_state.jd_chain_type_kwargs,
        chain_type='stuff',
        retriever=st.session_state.jd_retriever,
        memory=st.session_state.jd_memory,
    ).run("Create an interview guideline and prepare only one question for each topic. "
          "Make sure the questions test the technical knowledge.")
Purpose: Checks if jd_guideline is already in st.session_state.
If not, it creates a ChatOpenAI instance for generating responses.
Initializes jd_guideline by using RetrievalQA to generate an interview guideline with one
question for each topic based on the job description. This uses the retriever, prompt template,
and memory.

Initialize JD Screen
if "jd_screen" not in st.session_state:
    llm = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        temperature=0.8)
    PROMPT = PromptTemplate(
        input_variables=["history", "input"],
        template="""I want you to act as an interviewer strictly following the guideline in the current conversation.
            Candidate has no idea what the guideline is.
            Ask me questions and wait for my answers.
            Do not write explanations.
            Ask questions like a real person, only one question at a time.
            Do not ask the same question.
            Do not repeat the question.
            Do ask follow-up questions if necessary.
            Your name is GPTInterviewer.
            I want you to only reply as an interviewer.
            Do not write all the conversation at once.
            If there is an error, point it out.
            Current Conversation:
            {history}
            Candidate: {input}
            AI: """)
    st.session_state.jd_screen = ConversationChain(
        prompt=PROMPT,
        llm=llm,
        memory=st.session_state.jd_memory)
This code initializes the jd_screen session state, which is responsible for managing the
conversation between the AI interviewer and the candidate.
Condition Check: The code first checks if jd_screen is not already present in the
st.session_state.
Model Initialization: If jd_screen is not present, it initializes a new instance of ChatOpenAI with
the specified model (gpt-3.5-turbo) and temperature setting (0.8). The temperature parameter
controls the randomness of the model's responses, with higher values producing more varied
responses.
Prompt Template: Creates a PromptTemplate object that defines the structure and instructions
for the conversation.

input_variables: Specifies the variables history and input that will be replaced with the actual
conversation history and the candidate's input during runtime.
template: Provides detailed instructions for the AI on how to conduct the interview. This
includes:
• Acting strictly as an interviewer and following the guideline.
• Not providing explanations or writing all the conversation at once.
• Asking questions naturally, one at a time, without repetition.
• Using the name "GPTInterviewer".
• Pointing out errors if any occur.
• Incorporating the conversation history and the candidate's latest input into the prompt.
ConversationChain Initialization: Finally, it creates a ConversationChain object using the
PROMPT and llm (language model) along with st.session_state.jd_memory.
prompt: Uses the PROMPT template defined earlier.
llm: Uses the initialized ChatOpenAI model.
memory: Uses st.session_state.jd_memory to keep track of the conversation history.
Assignment to Session State: The initialized ConversationChain is then assigned to
st.session_state.jd_screen, making it available for use in the application.

Initialize JD Feedback
if 'jd_feedback' not in st.session_state:
    llm = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        temperature=0.8)
    st.session_state.jd_feedback = ConversationChain(
        prompt=PromptTemplate(
            input_variables=["history", "input"],
            template=templates.feedback_template),
        llm=llm,
        memory=st.session_state.jd_memory,
    )

• Purpose: This code initializes the jd_feedback session state, which is used to provide
feedback on the candidate's interview responses.
• Condition Check: The code first checks if jd_feedback is not already present in the
st.session_state.
• Model Initialization: If jd_feedback is not present, it initializes a new instance of
ChatOpenAI with the specified model (gpt-3.5-turbo) and temperature setting (0.8). The
temperature parameter controls the randomness of the model's responses, with higher
values producing more varied responses.
• ConversationChain Initialization: This creates a ConversationChain object for handling
the feedback process.
• prompt: Uses a PromptTemplate object with specific instructions for generating
feedback.
• input_variables: Specifies the variables history and input that will be replaced with the
actual conversation history and the candidate's input during runtime.
• template: Uses the templates.feedback_template which contains the detailed
instructions for providing feedback on the interview. This template would include

instructions to the AI on how to analyze and provide constructive feedback based on the
interview responses.
• memory: Uses st.session_state.jd_memory to keep track of the conversation history,
ensuring the feedback is relevant and contextual.
• Assignment to Session State: The initialized ConversationChain for feedback is then
assigned to st.session_state.jd_feedback, making it available for use in the application.

Check if jd_feedback is in st.session_state:


if 'jd_feedback' not in st.session_state:
Purpose: This line checks whether the jd_feedback key already exists in the st.session_state. If
it does not exist, the code inside the if block will execute to initialize it.

Initialize the Language Model:


llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0.8)

• Purpose: This initializes a new instance of the ChatOpenAI class with the specified
model (gpt-3.5-turbo) and a temperature setting of 0.8.
• Parameters: model_name: Specifies the model to be used, in this case, "gpt-3.5-turbo".
• temperature: Controls the randomness of the model's responses. A higher value (closer
to 1) makes the output more random, while a lower value (closer to 0) makes it more
deterministic.

Create the ConversationChain for Feedback:


st.session_state.jd_feedback = ConversationChain(
    prompt=PromptTemplate(
        input_variables=["history", "input"],
        template=templates.feedback_template),
    llm=llm,
    memory=st.session_state.jd_memory,
)
Purpose: This line creates a ConversationChain object which will handle the feedback process.
Parameters:
• prompt: Uses a PromptTemplate object that includes the structure and instructions for
generating feedback.
• input_variables: Specifies the variables history and input which will be used in the
prompt template. These variables will be replaced with the actual conversation history
and the candidate's input during runtime.
• template: Refers to templates.feedback_template, which contains the specific
instructions and structure for the feedback prompt. This template guides the language
model on how to analyze and provide constructive feedback based on the interview
responses.
• llm: The initialized language model (ChatOpenAI with gpt-3.5-turbo).
• memory: Uses st.session_state.jd_memory to maintain the context of the conversation.
This ensures that the feedback is coherent and takes into account the entire interview
history.

The answer_call_back Function
The answer_call_back function handles user input, processes it, and generates responses using
the OpenAI language model. It supports both text and voice input and maintains the
conversation history.

Context Manager for OpenAI Callback:


with get_openai_callback() as cb:
Purpose: This sets up a context manager to capture and manage the usage of OpenAI tokens
during the execution of the callback. cb will track the total tokens used.

Retrieve User Input:


human_answer = st.session_state.answer
Purpose: Retrieves the user's input from the session state.
Transcribe Audio Input (if applicable):
if voice:
    save_wav_file("temp/audio.wav", human_answer)
    try:
        input = transcribe("temp/audio.wav")
        # save human_answer to history
    except Exception:
        st.session_state.jd_history.append(Message("ai",
            "Sorry, I didn't get that."))
        return "Please try again."
else:
    input = human_answer
Purpose: Checks if the input is voice-based. If it is, the function saves the audio input to a WAV
file and attempts to transcribe it. If transcription fails, it adds an error message to the history and
prompts the user to try again.

Update Conversation History with User Input:


st.session_state.jd_history.append(
Message("human", input)
)
Purpose: Adds the user's input to the conversation history, marking it as originating from the
"human."

Generate Response Using OpenAI:


llm_answer = st.session_state.jd_screen.run(input)
Purpose: Passes the user's input to the OpenAI language model (jd_screen), which generates
a response based on the input and the conversation history.
Synthesize Speech from Response:
audio_file_path = synthesize_speech(llm_answer)
Purpose: Converts the generated response from text to speech and saves it as an audio file.

Create Audio Widget with Autoplay:
audio_widget = Audio(audio_file_path, autoplay=True)
Purpose: Creates an audio widget that automatically plays the synthesized speech. This widget
will be returned to the user interface for playback.

Update Conversation History with AI Response:


st.session_state.jd_history.append(
Message("ai", llm_answer)
)
Purpose: Adds the AI-generated response to the conversation history, marking it as originating
from the "ai."

Update Token Count:


st.session_state.token_count += cb.total_tokens
Purpose: Increments the session's token count by the total number of tokens used during this
callback, as tracked by cb.

Return the Audio Widget:


return audio_widget
Purpose: Returns the audio widget to be displayed in the user interface, allowing the user to
hear the AI's response.

Check if Job Description (jd) is Provided:


if jd:
Purpose: Ensures that the rest of the code executes only if a job description is provided.
Initialize Session States:
initialize_session_state_jd()
Purpose: Calls the function to initialize session states related to the job description, interview
guidelines, conversation history, and feedback mechanism.

Create a Placeholder (credit_card_placeholder) for Progress Information:


credit_card_placeholder = st.empty()
Purpose: Creates an empty placeholder for displaying progress or any other information related
to the interview process.

Create Two Columns for Buttons:


col1, col2 = st.columns(2)
with col1:
feedback = st.button("Get Interview Feedback")
with col2:
guideline = st.button("Show me interview guideline!")

Purpose: Creates two columns for organizing buttons. In the first column (col1), a button for
getting interview feedback is added. In the second column (col2), a button for showing the
interview guideline is added.

Create Placeholders for Chat and Answers:


chat_placeholder = st.container()
answer_placeholder = st.container()
audio = None
Purpose:
chat_placeholder: Container for displaying the chat history between the user and the AI.
answer_placeholder: Container for capturing and displaying the user's answers.
audio: Initialized to None and will later hold the audio widget if the user interacts via voice.
if guideline:
    st.write(st.session_state.jd_guideline)
if feedback:
    evaluation = st.session_state.jd_feedback.run(
        "please give evaluation regarding the interview")
    st.markdown(evaluation)
    st.download_button(label="Download Interview Feedback",
                       data=evaluation, file_name="interview_feedback.txt")
    st.stop()
Display Interview Guideline: Shows the interview guideline if the respective button is clicked.
Generate and Display Feedback: Generates feedback based on the interview and displays it if
the respective button is clicked.
Download Feedback: Provides an option to download the feedback as a text file.
Halt Execution: Stops further execution of the script when feedback is generated to ensure
proper handling of the results.

Voice Interaction Setup


Voice Checkbox:
voice: bool = st.checkbox("I would like to speak with AI Samvadini")
Purpose: Displays a checkbox allowing users to choose whether they want to interact with the
AI via voice.

Voice Input Handling:


if voice:
    answer = audio_recorder(pause_threshold=2.5, sample_rate=44100)
Purpose: If the checkbox is checked, it activates the audio_recorder function to record the
user’s voice input.

Text Input Handling:
else:
answer = st.chat_input("Your answer")
Purpose: If the checkbox is not checked, it provides a text input field for users to type their
answers.

Process and Save Answer:


if answer:
st.session_state['answer'] = answer
audio = answer_call_back()
Purpose: If an answer is provided (either by voice or text), it is saved to the session state, and
the answer_call_back function is called to process the answer and generate a response from
the AI.

Display Chat History:


with chat_placeholder:
for answer in st.session_state.jd_history:
if answer.origin == 'ai':
if auto_play and audio:
with st.chat_message("assistant"):
st.write(answer.message)
st.write(audio)
else:
with st.chat_message("assistant"):
st.write(answer.message)
else:
with st.chat_message("user"):
st.write(answer.message)
Purpose: Iterates through the interview history and displays the chat messages:
AI Responses: If the message is from the AI and auto_play is enabled, it displays the message
and plays the audio response.
User Responses: Displays messages from the user.

Display Progress:
credit_card_placeholder.caption(f"""
Progress: {int(len(st.session_state.jd_history) / 30 * 100)}%
completed.""")
Purpose: Shows a progress indicator based on the number of messages in the interview
history. It calculates the completion percentage and updates the caption in the
credit_card_placeholder.
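The formula behind the caption can be isolated for clarity. This sketch assumes, as the app does, that a full interview spans roughly 30 messages; the function name is invented for illustration:

```python
def progress_percent(history_len, target_messages=30):
    """Mirror the app's caption formula: messages so far over a ~30-message interview."""
    return int(history_len / target_messages * 100)


print(progress_percent(9))   # 30
print(progress_percent(15))  # 50
```

Because the result is truncated with int(), the displayed percentage only ever rounds down, and it can exceed 100 if the interview runs past 30 messages.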

Inform User if Job Description is Missing:


else:
st.info("Please submit a job description to start the interview.")

Purpose: If no job description (jd) is provided, it displays an informational message prompting
the user to submit one before starting the interview.
The outcome of the preceding code is shown in the figure below.

Page/Resume Screen.py
In this Resume Screen, the AI Samvadini will review your resume and discuss your past
experiences.
Note: The maximum length of your answer is 4097 tokens!
• Each Interview will take 10 to 15 mins.
• To start a new session, just refresh the page.
• Choose your favorite interaction style (chat/voice)
• Start introducing yourself and enjoy
from dataclasses import dataclass
import streamlit as st
from speech_recognition.openai_whisper import save_wav_file, transcribe
from audio_recorder_streamlit import audio_recorder
from langchain_community.callbacks import get_openai_callback
from langchain_community.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import RetrievalQA, ConversationChain
from langchain.prompts.prompt import PromptTemplate
from prompts.prompts import templates
from typing import Literal
from aws.synthesize_speech import synthesize_speech
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import NLTKTextSplitter
from PyPDF2 import PdfReader
from prompts.prompt_selector import prompt_sector

from streamlit_lottie import st_lottie
import json
from IPython.display import Audio
import nltk
These imports were explained in the Behavioral Screen section; the same libraries are used here.
def load_lottiefile(filepath: str):
    with open(filepath, "r") as f:
        return json.load(f)

st_lottie(load_lottiefile("images/welcome.json"), speed=1,
          reverse=False, loop=True, quality="high", height=300)
Loading a Lottie Animation File: The load_lottiefile function is defined to load a Lottie
animation file from the specified file path and return its JSON content.
Displaying the Lottie Animation: The st_lottie function is used to display the Lottie animation
in the Streamlit app with specified playback settings and dimensions. The animation is loaded
from the file "images/welcome.json", plays at normal speed, does not play in reverse, loops
indefinitely, and is displayed with high quality at a height of 300 pixels.
with st.expander("""Why did I encounter errors when I tried to talk to
the AI Samvadini?"""):
st.write("""This is because the app failed to record. Make sure
that your microphone is connected and that you have given permission to
the browser to access your microphone.""")
with st.expander("""Why did I encounter errors when I tried to upload
my resume?"""):
st.write("""
Please make sure your resume is in pdf format. More formats will be
supported in the future.
""")
Expandable Sections: The st.expander function is used to create sections that can be
expanded or collapsed by the user.
Explanatory Texts: Inside each st.expander, the st.write function is used to display
explanations for potential errors the users might face.
Audio Recording Errors: Explains the possible reasons for errors when talking to the AI, such
as microphone connectivity issues or browser permissions.
Resume Upload Errors: Advises users to ensure their resume is in PDF format and mentions
future support for more formats.
These expandable sections help in providing a cleaner user interface by hiding the explanations
until the user chooses to view them, making the main interface less cluttered.
st.markdown("""\n""")
position = st.selectbox("Select the position you are applying for",
["Data Analyst", "Software Engineer", "Marketing"])
resume = st.file_uploader("Upload your resume", type=["pdf"])
auto_play = st.checkbox("Let AI Samvadini speak! (Please don't switch
during the interview)")

Markdown Line Break: Adds spacing for better layout.
Position Selection Dropdown: Allows users to select the position they are applying for from a
predefined list.
Resume Upload: Enables users to upload their resume in PDF format.
Checkbox for AI Audio Playback: Provides an option to enable or disable AI Samvadini's
audio playback during the interview.
These components enhance the interactivity and user experience of the Streamlit application by
allowing users to select positions, upload resumes, and control audio playback.
@dataclass
class Message:
"""Class for keeping track of interview history."""
origin: Literal["human", "ai"]
message: str
The Message class is a simple data structure designed to keep track of messages exchanged
during an interview. By using the @dataclass decorator, the class is concise and easy to use,
providing a clear structure for storing the origin and content of each message.
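A quick usage sketch of the Message dataclass, showing how the display loop later filters the transcript by origin. The sample messages are invented for illustration:

```python
from dataclasses import dataclass
from typing import Literal


@dataclass
class Message:
    """Class for keeping track of interview history."""
    origin: Literal["human", "ai"]
    message: str


# Sample transcript (messages invented for illustration).
history = [
    Message("ai", "Hello, I am your interviewer today."),
    Message("human", "Hi, I'm a data analyst."),
]

# The chat-display loop filters by origin exactly like this.
ai_turns = [m.message for m in history if m.origin == "ai"]
print(ai_turns)
```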

Download NLTK punkt Tokenizer:


def save_vector(resume):
    """Convert the uploaded resume PDF into a searchable FAISS vector store."""
    nltk.download('punkt')
This ensures that the punkt tokenizer, which is used for splitting text into sentences, is available.

Read the PDF File:


pdf_reader = PdfReader(resume)
text = ""
for page in pdf_reader.pages:
text += page.extract_text()
The PdfReader object reads the PDF file.The text from each page of the PDF is extracted and
concatenated into a single string.

Split Text into Chunks:


# Split the document into chunks
text_splitter = NLTKTextSplitter()
texts = text_splitter.split_text(text)
The NLTKTextSplitter is used to split the extracted text into smaller chunks. This is useful for
processing large documents and creating embeddings for each chunk.
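NLTKTextSplitter depends on the downloaded NLTK data. As a rough illustration of what chunking achieves, the following stand-in (not the real splitter; max_chars is an invented parameter) splits on sentence punctuation with a regex and greedily packs sentences into size-limited chunks:

```python
import re


def naive_sentence_split(text, max_chars=80):
    """Rough stand-in for NLTKTextSplitter: split on sentence boundaries,
    then greedily pack sentences into chunks of at most max_chars characters."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + 1 + len(s) > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks


# Sample resume text, invented for illustration.
resume_text = ("Built ETL pipelines in Python. Led a team of four. "
               "Migrated reporting to a cloud warehouse. Reduced costs by 20%.")
print(naive_sentence_split(resume_text, max_chars=60))
```

Keeping chunks small matters because each chunk is embedded separately, and retrieval quality degrades when a single chunk mixes unrelated topics.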

Create Embeddings:
embeddings = OpenAIEmbeddings()
docsearch = FAISS.from_texts(texts, embeddings)
OpenAIEmbeddings is used to generate embeddings for the text chunks.
FAISS.from_texts creates a FAISS index from the text chunks and their embeddings, enabling
efficient similarity search.

Return Value:
return docsearch
The function returns the FAISS index (docsearch), which can be used to search for text within
the resume based on similarity to a query.

Convert Resume to Embeddings:


if 'docsearch' not in st.session_state:
    st.session_state.docsearch = save_vector(resume)
If the embeddings for the resume are not already in the session state, the save_vector function is called to convert the resume PDF into a searchable vector space using embeddings.

Retriever for Resume Screen:


if 'retriever' not in st.session_state:
    st.session_state.retriever = st.session_state.docsearch.as_retriever(search_type="similarity")
A retriever is initialized using the embeddings, enabling similarity-based searches.

Prompt for Retrieving Information:


if 'chain_type_kwargs' not in st.session_state:
    st.session_state.chain_type_kwargs = prompt_sector(position, templates)
This initializes the prompt template for retrieving information based on the position and
predefined templates.

Interview History:
if "resume_history" not in st.session_state:
    st.session_state.resume_history = []
    st.session_state.resume_history.append(Message(
        origin="ai",
        message="Hello, I am your interviewer today. I will ask you some "
                "questions regarding your resume and your experience. "
                "Please start by saying hello or introducing yourself. "
                "Note: The maximum length of your answer is 4097 tokens!"))
Initializes the interview history with a welcome message from the AI interviewer.

Token Count:
if "token_count" not in st.session_state:
st.session_state.token_count = 0
Initializes a counter to keep track of the number of tokens used in the conversation.

Memory Buffer for Resume Screen:


if "resume_memory" not in st.session_state:
    st.session_state.resume_memory = ConversationBufferMemory(
        human_prefix="Candidate: ", ai_prefix="Interviewer")

Initializes a memory buffer to keep track of the conversation history with specific prefixes for
human and AI responses.

Guideline for Resume Screen:


if "resume_guideline" not in st.session_state:
    llm = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        temperature=0.5)

    st.session_state.resume_guideline = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type_kwargs=st.session_state.chain_type_kwargs,
        chain_type='stuff',
        retriever=st.session_state.retriever,
        memory=st.session_state.resume_memory,
    ).run("Create an interview guideline and prepare only two questions for each topic. "
          "Make sure the questions test the knowledge.")
Generates an interview guideline using the specified LLM (Large Language Model) and the
retriever. The guideline includes only two questions per topic.

LLM Chain for Resume Screen:


if "resume_screen" not in st.session_state:
    llm = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        temperature=0.7)

    PROMPT = PromptTemplate(
        input_variables=["history", "input"],
        template="""I want you to act as an interviewer strictly following the guideline in the current conversation.
            Ask me questions and wait for my answers like a human. Do not write explanations.
            Candidate has no access to the guideline.
            Only ask one question at a time.
            Do ask follow-up questions if you think it's necessary.
            Do not ask the same question.
            Do not repeat the question.
            Your name is GPTInterviewer.
            I want you to only reply as an interviewer.
            Do not write all the conversation at once.
            Current Conversation:
            {history}
            Candidate: {input}
            AI: """)

    st.session_state.resume_screen = ConversationChain(
        prompt=PROMPT, llm=llm, memory=st.session_state.resume_memory)
Sets up the main interview conversation chain with a prompt template guiding the AI interviewer
on how to conduct the interview, ensuring a natural flow of questions and answers.
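PromptTemplate substitution behaves much like Python's str.format. The sketch below (a simplification, not LangChain's class) shows how the {history} and {input} slots are filled at runtime; the sample values are invented:

```python
# Stand-in for PromptTemplate: str.format fills the named slots the same
# way {history} and {input} are filled when the chain runs.
template = ("I want you to act as an interviewer.\n"
            "Current Conversation:\n"
            "{history}\n"
            "Candidate: {input}\n"
            "AI: ")

# Sample values, invented for illustration.
prompt = template.format(
    history=("Interviewer: Tell me about yourself.\n"
             "Candidate: I am a data analyst."),
    input="I have used SQL for five years.",
)
print(prompt)
```

Because the whole history is re-inserted on every turn, the prompt grows with the conversation, which is what the token counter below keeps track of.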

LLM Chain for Generating Feedback:


if "resume_feedback" not in st.session_state:
    llm = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        temperature=0.5)
    st.session_state.resume_feedback = ConversationChain(
        prompt=PromptTemplate(
            input_variables=["history", "input"],
            template=templates.feedback_template),
        llm=llm,
        memory=st.session_state.resume_memory,
    )
Initializes a separate conversation chain for generating feedback on the interview based on the
conversation history.

Context Management:
def answer_call_back():
with get_openai_callback() as cb:
This line uses a context manager to handle interactions with the OpenAI API, ensuring that the
callback for tracking usage (like token count) is managed properly. The cb object tracks token
usage during the API call.
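The callback's role can be mimicked with a small stdlib context manager. This sketch is not the real LangChain callback; the function name and the two simulated call sizes are invented. It shows how total_tokens accumulates inside the with block and is added to a running count afterwards:

```python
from contextlib import contextmanager


class TokenCounter:
    """Stand-in for the object yielded by get_openai_callback (illustrative)."""

    def __init__(self):
        self.total_tokens = 0


@contextmanager
def get_token_counter():
    # Yield a fresh counter; in LangChain, model calls made inside the
    # `with` block report their token usage to this object.
    cb = TokenCounter()
    yield cb


token_count = 0
with get_token_counter() as cb:
    # Simulate two model calls with invented token counts.
    cb.total_tokens += 120
    cb.total_tokens += 85

token_count += cb.total_tokens
print(token_count)  # 205
```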

Retrieve User Input:


human_answer = st.session_state.answer
Retrieves the user's input from the Streamlit session state, where it was previously stored.

Handle Voice Input:


if voice:
    save_wav_file("temp/audio.wav", human_answer)
    try:
        input = transcribe("temp/audio.wav")
        # save human_answer to history
    except Exception:
        st.session_state.resume_history.append(Message("ai",
            "Sorry, I didn't get that."))
        return "Please try again."
else:
    input = human_answer

Voice Input Handling:
If the voice flag is True, it indicates that the input was provided via voice. The function saves the
recorded audio to a file ("temp/audio.wav").
It then attempts to transcribe the audio file into text using the transcribe function.
If transcription fails (e.g., due to poor audio quality or other issues), it catches the exception,
appends an error message to the history, and returns a prompt for the user to try again.
Text Input Handling: If the input was not via voice, it simply uses the human_answer directly.

Update Interview History:


st.session_state.resume_history.append(
Message("human", input)
)
Appends the user's input to the interview history, labeling it as coming from the human.

Generate AI Response:
llm_answer = st.session_state.resume_screen.run(input)
Calls the resume_screen conversation chain (initialized elsewhere in the code) to generate a
response from the AI based on the user's input.

Convert AI Response to Speech:


audio_file_path = synthesize_speech(llm_answer)
Converts the AI’s text response into speech using the synthesize_speech function, which
returns the path to the audio file.

Create Audio Widget:


audio_widget = Audio(audio_file_path, autoplay=True)
Creates an audio widget for playing the synthesized speech, with autoplay enabled so that the
audio starts playing automatically.

Update Interview History with AI Response:


st.session_state.resume_history.append(
Message("ai", llm_answer)
)
Appends the AI’s response to the interview history, labeling it as coming from the AI.

Track Token Usage:


st.session_state.token_count += cb.total_tokens
Updates the total token count in the session state by adding the tokens used in the API call
(tracked by cb).

Return Audio Widget:


return audio_widget

Returns the audio widget to be displayed in the Streamlit app, allowing users to hear the AI’s
response.

Context and Initialization


if position and resume:
    # initialize session state
    initialize_session_state_resume()
Condition Check: The code block executes only if both position and resume are provided.
Initialization: Calls initialize_session_state_resume() to set up the session state with the
necessary components for processing the resume and conducting the interview.

UI Setup
credit_card_placeholder = st.empty()
col1, col2 = st.columns(2)
with col1:
feedback = st.button("Get Interview Feedback")
with col2:
guideline = st.button("Show me interview guideline!")
chat_placeholder = st.container()
answer_placeholder = st.container()
audio = None

Placeholders and Layout:


• credit_card_placeholder: Empty placeholder for progress display.
• col1, col2: Two columns created to place buttons for getting feedback and showing
guidelines.
• chat_placeholder and answer_placeholder: Containers for displaying the chat history
and handling user input.
• audio: Initialized to None to later store audio responses.

Buttons Handling
if guideline:
st.markdown(st.session_state.resume_guideline)
if feedback:
    evaluation = st.session_state.resume_feedback.run(
        "please give evaluation regarding the interview")
    st.markdown(evaluation)
    st.download_button(label="Download Interview Feedback",
                       data=evaluation, file_name="interview_feedback.txt")
    st.stop()

guideline Button:
If the "Show me interview guideline!" button is pressed, it displays the interview guideline from
the session state.

feedback Button:
• If the "Get Interview Feedback" button is pressed:
• Calls the resume_feedback chain to get feedback on the interview.
• Displays the feedback as markdown.
• Provides a download button to save the feedback as a text file.
• Stops further code execution in this block to prevent additional processing.

Handling User Input


else:
    with answer_placeholder:
        voice: bool = st.checkbox("I would like to speak with AI Samvadini!")
        if voice:
            answer = audio_recorder(pause_threshold=2, sample_rate=44100)
            # st.warning("An UnboundLocalError will occur if the microphone fails to record.")
        else:
            answer = st.chat_input("Your answer")
        if answer:
            st.session_state['answer'] = answer
            audio = answer_call_back()

Voice or Text Input:


• Voice Input: If the checkbox for speaking with the AI is checked, it records audio input
using audio_recorder with a pause threshold and sample rate specified.
• Text Input: If not using voice, it provides a text input field for the user to type their
answer.
• Process Input: Saves the user's answer to the session state and processes it using
answer_call_back() to generate and handle the AI's response.

Displaying Chat History


with chat_placeholder:
for answer in st.session_state.resume_history:
if answer.origin == 'ai':
if auto_play and audio:
with st.chat_message("assistant"):
st.write(answer.message)
st.write(audio)
else:
with st.chat_message("assistant"):
st.write(answer.message)
else:
with st.chat_message("user"):
st.write(answer.message)

Chat History Display
• Iterates through the interview history stored in resume_history.
• Displays AI responses (origin == 'ai') with optional audio playback if auto_play is
enabled.
• Displays user responses (origin == 'human') as text messages.

Progress Display
credit_card_placeholder.caption(f"""
Progress:
{int(len(st.session_state.resume_history) / 30 * 100)}% completed.""")
Progress Calculation: Updates the progress caption to reflect how much of the interview has been completed, assuming a full interview spans roughly 30 messages.
The outcome of the preceding code is shown in the figure below.

Initialization.py
import streamlit as st
Imports Streamlit, a library for creating interactive web applications in Python. It allows you to
build user interfaces, such as buttons, text inputs, and containers for displaying data.
from langchain.embeddings import OpenAIEmbeddings
Imports OpenAIEmbeddings from LangChain, which is used to create embeddings (vector
representations) of text using OpenAI's models.
from langchain.vectorstores import FAISS
Imports FAISS from LangChain, which is a library for efficient similarity search and clustering of
embeddings. It allows you to perform fast similarity searches over large collections of vectors.
from langchain.text_splitter import NLTKTextSplitter
Imports NLTKTextSplitter, a text splitting tool from LangChain that uses the Natural Language
Toolkit (NLTK) to split text into manageable chunks for processing.

from langchain.memory import ConversationBufferMemory
Imports ConversationBufferMemory, a memory management class from LangChain that stores
the history of a conversation to maintain context between user inputs and AI responses.
from langchain.chains import RetrievalQA, ConversationChain
Imports RetrievalQA, a class used to perform question-answering tasks with retrieval
capabilities, which allows the model to search for relevant information before generating an
answer.
Imports ConversationChain, a class used for managing conversational interactions with an AI
model, maintaining context, and generating responses based on conversation history.
from prompts.prompts import templates
Imports templates from prompts.prompts, which likely contains predefined prompt templates
used to structure interactions with the AI model.
from langchain.prompts.prompt import PromptTemplate
Imports PromptTemplate from LangChain, which is used to create prompt templates for
structuring inputs to AI models.
from langchain.chat_models import ChatOpenAI
Imports ChatOpenAI, a class for interacting with OpenAI's chat models within LangChain, which
enables conversational capabilities.
from PyPDF2 import PdfReader
Imports PdfReader from PyPDF2, a library for reading and extracting text from PDF files.
from prompts.prompt_selector import prompt_sector
Imports prompt_sector from prompts.prompt_selector, which is likely a function or module that helps select or generate prompts based on a given sector or context.

Text Splitting:
def embedding(text):
    """embeddings"""
    text_splitter = NLTKTextSplitter()
    texts = text_splitter.split_text(text)
This step splits the input text into smaller chunks or segments. This is necessary because large
texts need to be divided into manageable pieces for processing.
• NLTKTextSplitter(): Initializes the text splitter using the NLTK library.
• split_text(text): Splits the provided text into chunks. This method handles the
segmentation of text based on tokenization rules defined in NLTK.
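To see what sentence-aware splitting does without pulling in NLTK, here is a minimal sketch using only the standard library's re module (an illustration of the idea, not the actual NLTKTextSplitter implementation):

```python
import re

def split_into_chunks(text: str, chunk_size: int = 200) -> list:
    """Split text on sentence boundaries, packing consecutive sentences into
    chunks of at most ``chunk_size`` characters (a single oversized sentence
    may still exceed the limit, since sentences are never cut in half)."""
    # Break on whitespace that follows sentence-ending punctuation.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk when adding this sentence would overflow.
        if current and len(current) + len(sentence) + 1 > chunk_size:
            chunks.append(current.strip())
            current = ""
        current += sentence + " "
    if current.strip():
        chunks.append(current.strip())
    return chunks
```

With a tiny chunk size each sentence lands in its own chunk; with a generous one the whole text stays together.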

Creating Embeddings:
embeddings = OpenAIEmbeddings()
Creates an instance of OpenAIEmbeddings to convert the text chunks into vector embeddings.
These embeddings are numerical representations of the text that capture semantic meaning.

Building a Similarity Search System:
docsearch = FAISS.from_texts(texts, embeddings)
Initializes a similarity search system using FAISS (Facebook AI Similarity Search).
FAISS.from_texts(texts, embeddings): Creates a FAISS index from the text chunks and their
corresponding embeddings. This index allows for efficient similarity searches by comparing
embeddings.
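Under the hood, a similarity search compares the query's embedding against every stored vector and returns the closest match. A brute-force sketch of that idea (what FAISS does conceptually, minus the indexing optimizations that make it fast at scale; the function names are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest_text(query_vec, index):
    """index is a list of (text, vector) pairs; return the most similar text."""
    return max(index, key=lambda item: cosine_similarity(query_vec, item[1]))[0]
```

FAISS avoids this linear scan by organizing vectors into index structures, but the similarity measure it ranks by is the same kind of vector comparison.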

Returning the Document Search System:


return docsearch
Returns the FAISS index (docsearch) that can be used to perform similarity searches on the text
embeddings.

Creating a PDF Reader Instance:


def resume_reader(resume):
    pdf_reader = PdfReader(resume)
Initializes a PdfReader instance from the PyPDF2 library to handle the PDF file specified by the
resume parameter.
This parameter should be a file-like object or file path pointing to the PDF resume.

Extracting Text from Each Page:


    text = ""
    for page in pdf_reader.pages:
        text += page.extract_text()
    return text
Iterates through each page of the PDF to extract the text.
• pdf_reader.pages: Provides access to the individual pages of the PDF.
• page.extract_text(): Extracts the text content from a single page. The extracted text is
appended to the text variable, which accumulates the text from all pages.
• Returns the combined text extracted from all pages of the PDF resume.
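The accumulation loop above is plain string concatenation, which the sketch below demonstrates with stub page objects (StubPage and extract_all_text are hypothetical stand-ins for illustration, not PyPDF2 classes). One real-world wrinkle worth guarding against: extract_text() can return None for pages with no extractable text, so falling back to an empty string avoids a TypeError:

```python
class StubPage:
    """Stand-in for a PyPDF2 page object (illustration only)."""
    def __init__(self, text):
        self._text = text

    def extract_text(self):
        return self._text

def extract_all_text(pages):
    """Concatenate the text of every page, skipping pages with no text."""
    text = ""
    for page in pages:
        # extract_text() may return None for image-only or empty pages.
        text += page.extract_text() or ""
    return text
```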

Determine Document Source:


def initialize_session_state(template=None, position=None):
    """ initialize session states """
    if 'jd' in st.session_state:
        st.session_state.docsearch = embedding(st.session_state.jd)
    else:
        st.session_state.docsearch = embedding(resume_reader(st.session_state.resume))
Checks if there’s a job description (jd) in the session state. If present, it uses it to create
embeddings; otherwise, it reads and embeds the resume.
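The branching logic reduces to a simple precedence rule: the job description wins when both documents are available. A hypothetical helper operating on a plain dict (a stand-in for st.session_state) makes that explicit:

```python
def choose_embedding_source(session: dict):
    """Return which document drives the embeddings: the job description
    ('jd') if one was provided, otherwise the resume."""
    if 'jd' in session:
        return ('jd', session['jd'])
    return ('resume', session['resume'])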
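The branching logic reduces to a simple precedence rule: the job description wins when both documents are available. A hypothetical helper operating on a plain dict (a stand-in for st.session_state) makes that explicit:

```python
def choose_embedding_source(session: dict):
    """Return which document drives the embeddings: the job description
    ('jd') if one was provided, otherwise the resume."""
    if 'jd' in session:
        return ('jd', session['jd'])
    return ('resume', session['resume'])
```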

Create Retriever:
st.session_state.retriever = st.session_state.docsearch.as_retriever(
    search_type="similarity")

Initializes a retriever using the embeddings, set to perform similarity searches.

Set Up Prompt Template:


if 'jd' in st.session_state:
    Interview_Prompt = PromptTemplate(
        input_variables=["context", "question"],
        template=template)
    st.session_state.chain_type_kwargs = {"prompt": Interview_Prompt}
else:
    st.session_state.chain_type_kwargs = prompt_sector(position, templates)
Sets up the prompt template based on whether a job description or resume is provided. Uses a
specified template or retrieves one based on the position.

Initialize Conversation Memory:


st.session_state.memory = ConversationBufferMemory()
Initializes a memory buffer to keep track of the conversation history.
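Conceptually, a conversation buffer just appends every turn and replays the full transcript as context for the next prompt. A minimal stand-in (not LangChain's actual implementation, just the idea):

```python
class SimpleBufferMemory:
    """Minimal sketch of conversation-buffer memory: store every turn and
    replay the whole transcript as the context for the next prompt."""
    def __init__(self):
        self.turns = []

    def save_context(self, user_msg: str, ai_msg: str) -> None:
        self.turns.append((user_msg, ai_msg))

    def load_history(self) -> str:
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)
```

The trade-off of this strategy is that the context grows with every turn, which is why token usage is tracked elsewhere in the app.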

Initialize Interview History:


st.session_state.history = []
Sets up an empty list to store the history of the conversation.

Initialize Token Count:


st.session_state.token_count = 0
Initializes the token count to track the usage of tokens in the conversation.

Create Interview Guideline:


llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0.6,
)
st.session_state.guideline = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type_kwargs=st.session_state.chain_type_kwargs,
    chain_type='stuff',
    retriever=st.session_state.retriever,
    memory=st.session_state.memory).run(
        "Create an interview guideline and prepare only one question for "
        "each topic. Make sure the questions test the technical knowledge.")
Uses an LLM to generate an interview guideline with one question per topic.

Create Conversation Chain for Interviewing:


llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0.8,
)
PROMPT = PromptTemplate(
    input_variables=["history", "input"],
    template="""I want you to act as an interviewer strictly following the guideline in the current conversation.

Ask me questions and wait for my answers like a real person.
Do not write explanations.
Ask questions like a real person, only one question at a time.
Do not ask the same question.
Do not repeat the question.
Do ask follow-up questions if necessary.
Your name is GPTInterviewer.
I want you to only reply as an interviewer.
Do not write all the conversation at once.
If there is an error, point it out.

Current Conversation:
{history}

Candidate: {input}
AI: """)
st.session_state.screen = ConversationChain(prompt=PROMPT, llm=llm,
                                            memory=st.session_state.memory)
Sets up a conversation chain for the interviewer to interact with the candidate based on the
provided prompt.
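At each turn, the chain's job is essentially string templating: slot the accumulated history and the candidate's latest input into the prompt before sending it to the model. A simplified sketch (the template here is abbreviated, not the full prompt above):

```python
# Abbreviated version of the interviewer prompt for illustration.
INTERVIEWER_TEMPLATE = (
    "I want you to act as an interviewer strictly following the guideline "
    "in the current conversation.\n\n"
    "Current Conversation:\n{history}\n\n"
    "Candidate: {input}\nAI: "
)

def build_prompt(history: str, user_input: str) -> str:
    """Fill the template's placeholders, as PromptTemplate does internally."""
    return INTERVIEWER_TEMPLATE.format(history=history, input=user_input)
```

The trailing "AI: " is deliberate: it cues the model to continue the transcript in the interviewer's voice.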

Create Conversation Chain for Feedback:


llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0.5,
)
st.session_state.feedback = ConversationChain(
    prompt=PromptTemplate(
        input_variables=["history", "input"],
        template=templates.feedback_template,
    ),
    llm=llm,
    memory=st.session_state.memory,
)
Sets up a conversation chain for generating feedback based on the provided feedback template.

Openai_whisper.py
Import Required Libraries:
import openai
import os
import wave
• openai: Used to interact with OpenAI’s API for transcription.
• os: Used to fetch environment variables.
• wave: Used for reading and writing WAV audio files.

Set API Key for OpenAI:


openai.api_key = os.getenv("OPENAI_API_KEY")
Fetches the OpenAI API key from the environment variables and sets it for authenticating API
requests.

Configuration Class for WAV Files:


class Config:
    channels = 2
    sample_width = 2
    sample_rate = 44100
• channels: Number of audio channels (2 for stereo).
• sample_width: Number of bytes per sample (2 bytes for 16-bit audio).
• sample_rate: Number of samples per second (44100 Hz for CD quality).

Function to Save WAV File:


def save_wav_file(file_path, wav_bytes):
    with wave.open(file_path, 'wb') as wav_file:
        wav_file.setnchannels(Config.channels)
        wav_file.setsampwidth(Config.sample_width)
        wav_file.setframerate(Config.sample_rate)
        wav_file.writeframes(wav_bytes)

Parameters:
• file_path: Path where the WAV file will be saved.
• wav_bytes: Byte data of the WAV audio.

Functionality:
• Opens a WAV file in write-binary mode.
• Sets the number of channels, sample width, and frame rate based on the Config class.
• Writes the byte data to the WAV file.
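A quick round-trip check of this pattern, writing one second of silence and reading the header back, uses only the standard library (the file name is arbitrary):

```python
import os
import tempfile
import wave

CHANNELS, SAMPLE_WIDTH, SAMPLE_RATE = 2, 2, 44100  # stereo, 16-bit, CD quality

def save_wav_file(file_path, wav_bytes):
    with wave.open(file_path, 'wb') as wav_file:
        wav_file.setnchannels(CHANNELS)
        wav_file.setsampwidth(SAMPLE_WIDTH)
        wav_file.setframerate(SAMPLE_RATE)
        wav_file.writeframes(wav_bytes)

# One second of silence: sample_rate frames, each frame = channels * width bytes.
path = os.path.join(tempfile.gettempdir(), "demo_silence.wav")
save_wav_file(path, b"\x00" * (SAMPLE_RATE * CHANNELS * SAMPLE_WIDTH))

# Read the parameters back to confirm the header matches the configuration.
with wave.open(path, 'rb') as f:
    params = (f.getnchannels(), f.getsampwidth(), f.getframerate(), f.getnframes())
```

Reading the file back should report 2 channels, a 2-byte sample width, a 44100 Hz rate, and 44100 frames for the one second of audio.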

Function to Transcribe Audio File:


def transcribe(file_path):
    with open(file_path, 'rb') as audio_file:
        transcription = openai.Audio.transcribe("whisper-1", audio_file)
    return transcription['text']
file_path: Path to the audio file to be transcribed.

Functionality:
• Opens the audio file in read-binary mode.
• Uses OpenAI’s Whisper model ("whisper-1") to transcribe the audio.
• Returns the transcription text from the API response.
Note that openai.Audio.transcribe is the pre-1.0 interface of the openai Python package; in openai-python 1.x the equivalent call is client.audio.transcriptions.create(model="whisper-1", file=audio_file).

App_utils.py
Imports:
def switch_page(page_name: str):
    from streamlit.runtime.scriptrunner import RerunData, RerunException
    from streamlit.source_util import get_pages
• RerunData and RerunException are used to handle the rerun mechanism of Streamlit to
switch pages.
• get_pages retrieves information about the pages available in the Streamlit application.

Inner Function: standardize_name


    def standardize_name(name: str) -> str:
        return name.lower().replace("_", " ")
This helper function standardizes page names by converting them to lowercase and replacing
underscores with spaces. This ensures consistency when comparing page names.

Main Function: switch_page


    page_name = standardize_name(page_name)
page_name is the name of the page you want to switch to.
Standardize the Page Name: Convert the provided page name to a standardized format.
    pages = get_pages("home.py")  # OR whatever your main page is called
Retrieve Pages: get_pages("home.py") retrieves information about all pages from the specified
main script (e.g., "home.py").
    for page_hash, config in pages.items():
        if standardize_name(config["page_name"]) == page_name:
            raise RerunException(
                RerunData(
                    page_script_hash=page_hash,
                    page_name=page_name,
                )
            )
Find Matching Page: Loop through the pages to find the one with a name that matches the
standardized page_name.

Trigger Page Switch: If a match is found, raise a RerunException with RerunData to switch to
the desired page.
    page_names = [standardize_name(config["page_name"]) for config in pages.values()]
    raise ValueError(f"Could not find page {page_name}. Must be one of {page_names}")
Page Not Found: If no match is found, construct a list of available page names and raise a
ValueError indicating the desired page could not be found.
Usage:
This function allows programmatically switching between pages in a Streamlit application based
on page names.
Standardization: It ensures page name comparisons are case-insensitive and underscores are
handled properly.
Exception Handling: Utilizes RerunException to trigger a page switch and ValueError for error
reporting if the page is not found.
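The lookup-or-raise logic can be exercised without Streamlit by separating it from the RerunException plumbing (find_page_hash is a hypothetical refactoring for illustration, not part of the project):

```python
def standardize_name(name: str) -> str:
    return name.lower().replace("_", " ")

def find_page_hash(pages: dict, target: str) -> str:
    """Return the script hash whose page name matches ``target`` after
    standardization, or raise ValueError listing the available names
    (mirroring switch_page's error reporting)."""
    target = standardize_name(target)
    for page_hash, config in pages.items():
        if standardize_name(config["page_name"]) == target:
            return page_hash
    names = [standardize_name(c["page_name"]) for c in pages.values()]
    raise ValueError(f"Could not find page {target}. Must be one of {names}")
```

With this split, switch_page would only need to wrap the returned hash in a RerunException, and the matching rules are testable in isolation.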

Aws/Synthesize_speech.py
Imports and Setup
import boto3
import streamlit as st
from contextlib import closing
import os
import sys
import subprocess
from tempfile import gettempdir
from langchain_community.callbacks.manager import get_openai_callback
• boto3: AWS SDK for Python to interact with AWS services.
• streamlit: For building web applications.
• contextlib.closing: Ensures resources are properly closed.
• os, sys, subprocess: Used for file operations and executing system commands.
• tempfile.gettempdir: Retrieves the system's temporary directory path.
• get_openai_callback: a LangChain callback manager for tracking OpenAI token usage; imported here but not used in this module.

AWS Session Configuration


Session = boto3.Session(
    aws_access_key_id="enter_your_aws_access_key_id",
    aws_secret_access_key="enter_your_aws_secret_access_key",
    region_name="us-east-1"
)
boto3.Session: Configures AWS credentials and the region. Replace the placeholder values with your actual AWS credentials (ideally loaded from environment variables rather than hardcoded), and use a valid AWS region name such as "us-east-1" or "ap-northeast-2" — the value in the original source, "asia-northeast3-a", is a Google Cloud zone, not an AWS region.

synthesize_speech Function
def synthesize_speech(text):
    Polly = Session.client("polly")
    response = Polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Joanna")
• Session.client("polly"): Creates a client for AWS Polly service.
• Polly.synthesize_speech: Requests Polly to convert text to speech.
• Text: The input text to convert to speech.
• OutputFormat="mp3": Specifies the audio format.
• VoiceId="Joanna": Chooses the voice for speech synthesis.
    if "AudioStream" in response:
        with closing(response["AudioStream"]) as stream:
            output = os.path.join(gettempdir(), "speech.mp3")
• "AudioStream": Checks if the response contains audio data.
• closing(response["AudioStream"]): Ensures the audio stream is properly closed after
use.
• gettempdir(): Gets the path to the system's temporary directory.
• output: File path for the generated MP3 file.
            try:
                # Open a file for writing the output as a binary stream
                with open(output, "wb") as file:
                    file.write(stream.read())
            except IOError as error:
                # Could not write to file, exit gracefully
                print(error)
                sys.exit(-1)
• open(output, "wb"): Opens a file in binary write mode to save the audio data.
• file.write(stream.read()): Writes the audio data to the file.
• Error Handling: Exits gracefully if an error occurs during file operations.
    else:
        # The response didn't contain audio data, exit gracefully
        print("Could not stream audio")
        sys.exit(-1)
    '''
    # Play the audio using the platform's default player
    if sys.platform == "win32":
        os.startfile(output)
    else:
        # The following works on macOS and Linux. (Darwin = mac, xdg-open = linux)
        opener = "open" if sys.platform == "darwin" else "xdg-open"
        subprocess.call([opener, output])
    '''
    return output

Playback: This code is commented out but would automatically play the audio file using the
default player for the operating system:
Windows: Uses os.startfile.
macOS/Linux: Uses subprocess.call with open or xdg-open.
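The platform dispatch in the commented-out block can be isolated into a small pure function for clarity (a hypothetical helper; on Windows, os.startfile is used directly rather than a subprocess, so the function returns None there):

```python
def playback_command(platform: str, output_path: str):
    """Return the subprocess command (as a list) that would play the file on
    the given platform, or None on Windows where os.startfile is used."""
    if platform == "win32":
        return None  # caller should use os.startfile(output_path) instead
    opener = "open" if platform == "darwin" else "xdg-open"
    return [opener, output_path]
```

Keeping the dispatch pure makes it easy to verify the behavior for each platform without actually launching a media player.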
Note: In this chapter, I have provided the code for all the main modules of this project with detailed explanations. For a short overview and the full source code, you can refer to the article An Intelligent Podcast Interviewer: AI Samvadini.

6
Future Scope
Overview

In this chapter, we explore the future possibilities for AI Samvadini, a smart podcast interviewer.

Enhanced Natural Language Processing (NLP) Capabilities
• Advanced Question Generation: Implement more sophisticated algorithms for
generating a wider variety of interview questions based on job descriptions and resumes.
• Sentiment Analysis: Integrate sentiment analysis to gauge the tone and confidence of
user responses.

Multi-language Support
• Language Translation: Add support for multiple languages to accommodate users from
different linguistic backgrounds.
• Localized Interview Guidelines: Provide interview questions and feedback in different
languages based on user preferences.

Voice and Speech Enhancements


• Voice Recognition Improvement: Implement more advanced voice recognition
capabilities for better accuracy in transcribing user responses.
• Customizable Speech Synthesis: Allow users to choose from different voices or
accents for the text-to-speech functionality.

User Personalization
• Profile Management: Allow users to create and manage profiles, including uploading
multiple resumes or customizing interview guidelines.
• Personalized Feedback: Offer tailored feedback based on the user’s past interactions
and performance history.

Advanced Analytics and Reporting


• Performance Analytics: Provide detailed analytics and reports on user performance
during mock interviews.
• Trend Analysis: Analyze trends and common weaknesses across multiple users to offer
generalized improvement tips.

User Interaction and Engagement


• Gamification: Introduce gamified elements such as scoring systems, badges, and
leaderboards to increase user engagement.
• Live Interaction: Incorporate live feedback mechanisms where users can receive real-time responses from human reviewers or mentors.

Improved Security and Privacy


• Enhanced Data Encryption: Implement advanced encryption techniques for storing and
transmitting sensitive user data.
• User Consent Management: Develop robust mechanisms for managing user consent
and data privacy preferences.
These enhancements can expand the functionality of the project, improve user experience, and
adapt the system to meet evolving needs and technological advancements.

OUR MISSION
Free Education is Our Basic Need! Our mission is to empower millions of developers worldwide by providing the latest unbiased news, advice, and tools for learning, sharing, and career growth. We’re passionate about nurturing the next young generation and helping them become not only great programmers but also exceptional human beings.

ABOUT US
CSharp Inc, headquartered in Philadelphia, PA, is an online global community of software
developers. C# Corner served 29.4 million visitors in year 2022. We publish the latest news and articles
on cutting-edge software development topics. Developers share their knowledge and connect via
content, forums, and chapters. Thousands of members benefit from our monthly events, webinars,
and conferences. All conferences are managed under Global Tech Conferences, a CSharp
Inc sister company. We also provide tools for career growth such as career advice, resume writing,
training, certifications, books and white-papers, and videos. We also connect developers with their potential employers via our Job board. Visit C# Corner
