LangChain Notes
Explanation:
o import os: Imports the os module, which provides a way to use
operating system-dependent functionality, like managing environment
variables.
o from getpass import getpass: Imports the getpass function, which
allows you to securely prompt the user for a password or token.
o os.environ["HUGGINGFACEHUB_API_TOKEN"] = getpass('HF Token: '):
Explanation:
o This line initializes an instance of the HuggingFaceHub class using the
Hugging Face model HuggingFaceH4/zephyr-7b-beta.
o repo_id: Specifies the model to be used from Hugging Face's model
repository. Here, HuggingFaceH4/zephyr-7b-beta is the chosen model.
o model_kwargs: Provides a dictionary of keyword arguments that
configure the model's behavior:
temperature: Controls the randomness of the model's output.
A lower value (e.g., 0.3) makes the output more deterministic,
while a higher value makes it more random.
max_new_tokens: Specifies the maximum number of new
tokens (words or word pieces) the model can generate in a
single output.
repetition_penalty: A value greater than 1.0 penalizes
repetitive text generation, helping to produce more varied
responses.
return_full_text: When set to False, only the generated text is
returned, excluding the input prompt.
5. Generating a Response with the Language
Model:
Explanation:
o Defines a query variable containing the text 'write a paragraph on life
in detail'.
o llm.invoke(query) sends the query to the model and gets the
generated output. This function call triggers the model to generate a
paragraph on "life."
Explanation:
o from langchain_core.prompts import ChatPromptTemplate: Imports the
ChatPromptTemplate class, which helps in structuring prompts for
chatbot-like interactions.
o ChatPromptTemplate.from_messages(...): Creates a prompt template
that simulates a conversation between a system (the AI's role or
behavior) and a human (the user):
The system message defines the AI's role as a freelancer
teaching others about freelance techniques.
The human message is a placeholder ({input}) for the user input
in the conversation.
7. Formatting the Template and Getting a
Response:
Explanation:
o template.format_messages(...) formats the template with the provided
input ('I want you to tell me how to earn doing programming?').
o llm.invoke(prompt) sends the formatted prompt to the model and gets
the AI's response.
Explanation:
o Like the previous step, but with a different input, asking for the top
skills required in the modern day in JSON format.
Explanation:
o from langchain.output_parsers import StructuredOutputParser,
ResponseSchema: Imports classes for parsing structured output.
o ResponseSchema(...): Creates schemas for expected parts of the
response (question and answer).
o StructuredOutputParser.from_response_schemas(...): Creates an output
parser that knows how to parse responses based on the defined
schemas.
Explanation:
o output_parsers.get_format_instructions(): Retrieves instructions on how
to format the input/output according to the parser's requirements.
o print(instruct): Prints the formatting instructions.
Explanation:
o ChatPromptTemplate.from_template(...): Creates another prompt
template, providing detailed instructions for the AI's behavior. The AI is
instructed to only answer questions about freelancing and finance.
12. Formatting the New Template and Parsing the
Response:
Explanation:
o template2.format_messages(...): Formats the new template with the
provided input and instructions.
o llm.invoke(prompt2): Sends the formatted template to the model to get
a response.
o output_parsers.parse(response1): Parses the model's response using
the structured output parser.
o print(response1): Prints the raw response from the model.