LangChain Notes

The document provides a comprehensive guide on using the LangChain library, including installation of required packages, importing necessary classes, and setting up environment variables for accessing Hugging Face models. It details the process of initializing a language model, creating prompts, generating responses, and parsing outputs using structured output parsers. Additionally, it highlights key concepts such as prompt engineering and the importance of API tokens for secure access.

Uploaded by

chinxpie4

LangChain Notes 🦜🔗

1. Installing Required Packages:

 Explanation: The pip install command is used to install Python packages. Here, we're installing two packages: langchain and langchain-community.
o langchain is a library for building language-model applications.
o langchain-community is an additional package that contains community-contributed integrations for langchain.

2. Importing the HuggingFaceHub Class:

 Explanation: This line imports the HuggingFaceHub class from the langchain_community.llms module.
o HuggingFaceHub is an interface that connects the langchain library with language models hosted on Hugging Face's Model Hub.
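The original code cells are not reproduced above, so here is a minimal sketch of steps 1 and 2 (the import path follows the langchain-community package layout):

```python
# Step 1 is a shell command, not Python:
#   pip install langchain langchain-community

# Step 2: import the Hugging Face Hub wrapper from langchain-community.
from langchain_community.llms import HuggingFaceHub
```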

3. Setting Up Environment Variables:

 Explanation:
o import os: Imports the os module, which provides a way to use
operating system-dependent functionality, like managing environment
variables.
o from getpass import getpass: Imports the getpass function, which
allows you to securely prompt the user for a password or token.
o os.environ["HUGGINGFACEHUB_API_TOKEN"] = getpass('HF Token: '):
 Sets the Hugging Face Hub API token as an environment variable named HUGGINGFACEHUB_API_TOKEN.
 getpass('HF Token: ') prompts the user to input their Hugging Face API token securely (without echoing it back in the terminal).
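Step 3 as described above can be sketched as (this prompts interactively for the token, so it only runs in a terminal or notebook):

```python
import os
from getpass import getpass

# Prompt for the token without echoing it, then expose it to the
# Hugging Face client code via an environment variable.
os.environ["HUGGINGFACEHUB_API_TOKEN"] = getpass('HF Token: ')
```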

4. Initializing the Language Model:

 Explanation:
o This line initializes an instance of the HuggingFaceHub class using the
Hugging Face model HuggingFaceH4/zephyr-7b-beta.
o repo_id: Specifies the model to be used from Hugging Face's model
repository. Here, HuggingFaceH4/zephyr-7b-beta is the chosen model.
o model_kwargs: Provides a dictionary of keyword arguments that
configure the model's behavior:
 temperature: Controls the randomness of the model's output.
A lower value (e.g., 0.3) makes the output more deterministic,
while a higher value makes it more random.
 max_new_tokens: Specifies the maximum number of new
tokens (words or word pieces) the model can generate in a
single output.
 repetition_penalty: A value greater than 1.0 penalizes
repetitive text generation, helping to produce more varied
responses.
 return_full_text: When set to False, only the generated text is
returned, excluding the input prompt.
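A sketch of the initialization described in step 4. The text names the parameter keys and the temperature example (0.3); the other values shown here are assumptions, not the original cell's values:

```python
from langchain_community.llms import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="HuggingFaceH4/zephyr-7b-beta",  # model from Hugging Face's repository
    model_kwargs={
        "temperature": 0.3,          # lower = more deterministic output
        "max_new_tokens": 256,       # cap on generated tokens (value assumed)
        "repetition_penalty": 1.1,   # >1.0 penalizes repetition (value assumed)
        "return_full_text": False,   # return only the generated text, not the prompt
    },
)
```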
5. Generating a Response with the Language Model:

 Explanation:
o Defines a query variable containing the text 'write a paragraph on life
in detail'.
o llm.invoke(query) sends the query to the model and gets the
generated output. This function call triggers the model to generate a
paragraph on "life."
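Step 5 in code, assuming the llm instance from step 4:

```python
query = 'write a paragraph on life in detail'

# invoke() sends the query to the hosted model and returns the generated text.
response = llm.invoke(query)
print(response)
```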

6. Creating a ChatPrompt Template:

 Explanation:
o from langchain_core.prompts import ChatPromptTemplate: Imports the
ChatPromptTemplate class, which helps in structuring prompts for
chatbot-like interactions.
o ChatPromptTemplate.from_messages(...): Creates a prompt template
that simulates a conversation between a system (the AI's role or
behavior) and a human (the user):
 The system message defines the AI's role as a freelancer teaching others about freelancing techniques.
 The human message is a placeholder ({input}) for the user input
in the conversation.
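A sketch of the template from step 6; the exact wording of the system message is an assumption based on the description above:

```python
from langchain_core.prompts import ChatPromptTemplate

# The system message fixes the AI's role; {input} is filled in later by the user.
template = ChatPromptTemplate.from_messages([
    ("system", "You are a freelancer teaching others about freelancing techniques."),
    ("human", "{input}"),
])
```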
7. Formatting the Template and Getting a Response:

 Explanation:
o template.format_messages(...) formats the template with the provided
input ('I want you to tell me how to earn doing programming?').
o llm.invoke(prompt) sends the formatted prompt to the model and gets
the AI's response.
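Step 7 in code, assuming template and llm from the earlier steps:

```python
# Fill the {input} placeholder with the user's question.
prompt = template.format_messages(
    input='I want you to tell me how to earn doing programming?'
)

# Send the formatted messages to the model.
response = llm.invoke(prompt)
print(response)
```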

8. Creating Another Prompt and Generating a Response:

 Explanation:
o Like the previous step, but with a different input, asking for the top
skills required in the modern day in JSON format.
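Step 8 reuses the same template with a new input; the question's exact wording here is a paraphrase of the description above:

```python
prompt = template.format_messages(
    input='What are the top skills required in the modern day? Answer in JSON format.'
)
response = llm.invoke(prompt)
print(response)
```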

9. Importing Output Parsers and Defining Response Schemas:

 Explanation:
o from langchain.output_parsers import StructuredOutputParser,
ResponseSchema: Imports classes for parsing structured output.
o ResponseSchema(...): Creates schemas for expected parts of the
response (question and answer).
o StructuredOutputParser.from_response_schemas(...): Creates an output
parser that knows how to parse responses based on the defined
schemas.
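A sketch of step 9; the schema descriptions are assumptions (the text only names the question and answer fields):

```python
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

# One schema per field we expect in the model's structured answer.
response_schemas = [
    ResponseSchema(name="question", description="the question that was asked"),
    ResponseSchema(name="answer", description="the answer to the question"),
]

# Build a parser that knows how to read responses matching these schemas.
output_parsers = StructuredOutputParser.from_response_schemas(response_schemas)
```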

10. Getting Format Instructions for the Parser:

 Explanation:
o output_parsers.get_format_instructions(): Retrieves instructions on how
to format the input/output according to the parser's requirements.
o print(instruct): Prints the formatting instructions.

11. Creating Another Template with the Output Parser:

 Explanation:
o ChatPromptTemplate.from_template(...): Creates another prompt
template, providing detailed instructions for the AI's behavior. The AI is
instructed to only answer questions about freelancing and finance.
12. Formatting the New Template and Parsing the Response:

 Explanation:
o template2.format_messages(...): Formats the new template with the
provided input and instructions.
o llm.invoke(prompt2): Sends the formatted template to the model to get
a response.
o output_parsers.parse(response1): Parses the model's response using
the structured output parser.
o print(response1): Prints the raw response from the model.

o type(response1) and type(parser): Checks and prints the types of response1 and parser.
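Step 12 in code, assuming template2, instruct, llm, and output_parsers from the earlier steps; the question is a hypothetical stand-in for the original input:

```python
prompt2 = template2.format_messages(
    input="How do I price a freelance project?",  # hypothetical question
    instructions=instruct,
)

response1 = llm.invoke(prompt2)            # raw string from the model
parser = output_parsers.parse(response1)   # dict keyed by the schema names

print(response1)
print(type(response1), type(parser))       # typically str and dict
```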

Summary of the Workflow:


1. Install Packages: Install necessary libraries (langchain, langchain-community).
2. Import Modules and Set Up Environment: Import necessary modules, including the HuggingFaceHub class, and set up the API token for Hugging Face access.
3. Initialize Language Model: Set up the language model with specific
parameters.
4. Create and Format Prompts: Define different prompts for conversations
with the model and get responses.
5. Output Parsing: Use structured output parsers to format and interpret the
model's responses.
Notes:
 LLMs (Large Language Models): These are advanced AI models that can
generate human-like text based on given inputs. They are used for various
applications, such as chatbots, content creation, etc.
 Prompt Engineering: Involves crafting the input (prompt) to get the desired
response from an LLM. Different prompts can yield different outputs, even
from the same model.
 API Tokens: Used for authentication when accessing services like Hugging
Face. Keep your API tokens secure to protect your access.
 Structured Output Parsing: A method to interpret the model's response in
a specific format, making it easier to understand and use programmatically.
