
LangChain

By,
Madhiarasan
Table of contents
5 W’s: Who, What, When, Where, Why

Prompt Template: Modify the responses of LLMs for a specific task

Temporary memory: Give LLMs the ability to keep track of conversational context

Text splitters: Handle the token limits of LLMs

Chains: Connect different functionalities together

Vector DB: Chat with your data

Semantic Cache: Answer faster without calling the LLM

Internet access: Design a web crawler for collecting the necessary data

Evaluation: LangSmith intro


01 5 W’s of LangChain
What: LangChain is an open-source framework that simplifies the process of
creating LLM-driven applications. It provides a library of building blocks that can
be "chained" together to perform complex NLP tasks.

Why: Reducing the amount of code required to build LLM applications.
Making it easier to experiment and prototype with LLMs.
Providing tools for customizing the outputs of LLMs.

Who: LangChain is for developers who want to build applications powered by
large language models (LLMs). This can include specialists in natural language
processing (NLP) as well as those new to the field.

When: The initial release of LangChain was in October 2022. The
most recent stable release was February 19, 2024.

Where: LangChain is available as Python and JavaScript libraries.
02 Prompt Template
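A minimal prompt-template sketch, assuming the `langchain` Python package is installed; the template wording and the `product`/`tone` variables are illustrative.

```python
from langchain.prompts import PromptTemplate

# Template with two placeholder variables; the wording is illustrative.
template = PromptTemplate(
    input_variables=["product", "tone"],
    template="Write a {tone} one-sentence marketing blurb for {product}.",
)

# format() fills in the placeholders and returns a plain string that can be
# passed to any LLM wrapper.
prompt_text = template.format(product="a reusable water bottle", tone="playful")
print(prompt_text)
```

The same template can be reused with different variable values, which is what lets a single prompt be tailored to a specific task.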
03 Temporary memory
Buffer memory creation
Conversation Buffer: Stores all chat messages in order.
Conversation Buffer Window: Stores a limited window of recent chat messages.
Entity Memory: Keeps track of named entities mentioned in the conversation.
Conversation Knowledge Graph: Builds a graph representing relationships between
entities discussed.
Conversation Summary: Creates a concise summary of the conversation history.
Conversation Summary Buffer: Keeps recent messages verbatim and summarizes older ones once a token limit is reached.
Conversation Token Buffer: Keeps recent messages up to a token-count limit rather than a message count.

https://colab.research.google.com/drive/1g1w-ArWojLS4frjI_xNa25S0051Kol0E
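A small buffer-memory sketch to go with the notebook linked above, assuming the `langchain` package is installed; the window size `k=2` and the sample messages are illustrative.

```python
from langchain.memory import ConversationBufferWindowMemory

# Keep only the last k=2 exchanges, so older turns drop out of the prompt.
memory = ConversationBufferWindowMemory(k=2)

# save_context() records one user/AI exchange; the messages are made up.
memory.save_context({"input": "Hi, I am Madhi"}, {"output": "Hello Madhi!"})
memory.save_context({"input": "I like cricket"}, {"output": "Cricket is fun."})
memory.save_context({"input": "What is my name?"}, {"output": "You said it is Madhi."})

# load_memory_variables() returns the chat history that would be injected into
# the next prompt; with k=2 only the two most recent exchanges remain.
print(memory.load_memory_variables({}))
```

The other memory types listed above follow the same save/load pattern but differ in what they retain (entities, a knowledge graph, a summary, or a token-limited buffer).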
04 Text Splitters
Types of text splitters
Character-based Splitters

Sentence-based Splitters

Token-based Splitters

Code Splitters

Custom Splitters
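A character-based splitter sketch, assuming the `langchain` package (which bundles the text splitters) is installed; the chunk size, overlap, and sample text are illustrative.

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Splits on paragraph and sentence boundaries first, falling back to single
# characters, so every chunk fits within a chosen size budget.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=200,    # maximum characters per chunk (illustrative)
    chunk_overlap=20,  # overlap preserves context across chunk boundaries
)

long_text = "LangChain is an open-source framework for LLM applications. " * 30
chunks = splitter.split_text(long_text)
print(f"{len(chunks)} chunks; first chunk starts with: {chunks[0][:60]}")
```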
05 Chains
https://python.langchain.com/docs/modules/chains
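A minimal chain sketch in the style of the linked docs, assuming the `langchain`, `langchain-core`, and `langchain-openai` packages are installed and `OPENAI_API_KEY` is set; the model name and prompt wording are illustrative.

```python
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
prompt = ChatPromptTemplate.from_template("Summarise this in one sentence: {text}")
llm = ChatOpenAI(model="gpt-3.5-turbo")

# The | operator composes prompt -> model -> output parser into one runnable chain.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain chains connect prompts, models, and parsers."}))
```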
06 Vector DB
| Feature     | Redis                   | Milvus                                       | Weaviate                                     | Qdrant                                       | Pinecone                                     |
|-------------|-------------------------|----------------------------------------------|----------------------------------------------|----------------------------------------------|----------------------------------------------|
| Type        | In-memory data store    | Vector database                              | Vector database                              | Vector database                              | Vector database                              |
| Open Source | Yes                     | Yes                                          | Yes                                          | Yes                                          | No (Freemium)                                |
| Focus       | Speed & Caching         | Vector embeddings                            | Data & embeddings                            | Vector embeddings                            | Vector embeddings                            |
| Use Cases   | Caching, Sessions, etc. | Machine learning models (similarity search)  | Machine learning models (similarity search)  | Machine learning models (similarity search)  | Machine learning models (similarity search)  |

https://colab.research.google.com/drive/1lwBeJLLYXdOivlUZZAJ7KysL6Yc9xdpl?
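A small "chat with your data" sketch using FAISS as a local stand-in for the vector databases in the table, assuming `langchain-community`, `langchain-openai`, and `faiss-cpu` are installed and `OPENAI_API_KEY` is set; the sample documents and query are illustrative.

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Embed a few toy documents and index them in a local FAISS store.
docs = [
    "LangChain chains components together to build LLM applications.",
    "Vector databases store embeddings for similarity search.",
    "Semantic caching answers repeated questions without an LLM call.",
]
store = FAISS.from_texts(docs, OpenAIEmbeddings())

# similarity_search() embeds the query and returns the closest documents.
for doc in store.similarity_search("How can I search my own data?", k=2):
    print(doc.page_content)
```

Any of the databases in the table can be swapped in through its corresponding LangChain vector-store integration without changing the rest of the flow.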
07 Semantic Cache
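A self-contained sketch of the semantic-cache idea: store answers keyed by question embeddings and reuse an answer when a new question is close enough in meaning, skipping the LLM call. The `embed_fn` and `llm_fn` callables and the 0.9 threshold are hypothetical placeholders, not a specific LangChain API.

```python
import numpy as np

class SemanticCache:
    """Reuse an earlier answer when a new question is semantically close to it."""

    def __init__(self, embed_fn, llm_fn, threshold=0.9):
        self.embed_fn = embed_fn    # hypothetical: text -> embedding vector
        self.llm_fn = llm_fn        # hypothetical: prompt -> answer (the slow path)
        self.threshold = threshold  # cosine-similarity cutoff (illustrative)
        self.entries = []           # list of (embedding, answer) pairs

    @staticmethod
    def _cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def ask(self, question):
        q_vec = np.asarray(self.embed_fn(question))
        # Cache hit: a stored question is close enough, so skip the LLM entirely.
        for vec, answer in self.entries:
            if self._cosine(q_vec, vec) >= self.threshold:
                return answer
        # Cache miss: call the LLM once and remember the result for next time.
        answer = self.llm_fn(question)
        self.entries.append((q_vec, answer))
        return answer
```

LangChain can attach a cache of this kind globally (e.g. via `set_llm_cache` with a semantic-cache backend); the sketch above only illustrates the lookup logic.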
08 Internet Access
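A minimal web-crawler sketch for collecting the necessary data, assuming `requests` and `beautifulsoup4` are installed; the seed URL, page limit, and the decision to follow every link are illustrative simplifications.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def crawl(seed_url, max_pages=5):
    """Breadth-first fetch of a handful of pages, returning {url: visible text}."""
    queue, seen, pages = [seed_url], set(), {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        pages[url] = soup.get_text(separator=" ", strip=True)
        # Queue the links on this page; a real crawler would also respect
        # robots.txt and filter by domain.
        for link in soup.find_all("a", href=True):
            queue.append(urljoin(url, link["href"]))
    return pages

# Example (hypothetical seed URL): the collected text can then be split,
# embedded, and stored in a vector DB for retrieval.
# pages = crawl("https://example.com")
```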
09 LangSmith
Debugging:
LangSmith allows you to inspect individual LLM calls within your LangChain code,
providing visibility into the inputs, outputs, and intermediate states. This helps pinpoint
errors or performance bottlenecks (a minimal tracing setup is sketched after this list).

You can leverage features like:
Chain visualization to see the entire execution flow.
Debugging tools to inspect intermediate data and identify problematic steps.
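A minimal tracing setup, assuming a LangSmith account and the `langchain-openai` package; the environment-variable names follow LangSmith's standard configuration, while the project name and model are illustrative.

```python
import os

# Point LangChain at LangSmith; any chain or LLM call made afterwards is traced.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"  # placeholder
os.environ["LANGCHAIN_PROJECT"] = "langchain-demo"            # illustrative project name

from langchain_openai import ChatOpenAI

# This call appears in the LangSmith UI with its inputs, outputs, and latency.
llm = ChatOpenAI(model="gpt-3.5-turbo")
print(llm.invoke("Say hello in one word.").content)
```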

Testing and Evaluation:
LangSmith facilitates comprehensive testing of LLM applications.
You can create test cases with specific inputs and expected outputs.
Evaluate LLM performance using various metrics, including accuracy, fluency, and coherence.
Compare the performance of different LLM models for a given task.

Explore functionalities like:
Test case management to define and execute test scenarios.
Evaluation tools to measure LLM performance based on chosen metrics.
A/B testing to compare different LLM models or configurations.

Monitoring and Logging:
LangSmith enables continuous monitoring of LLM applications.
Track key statistics like LLM call latency, response times, and error rates.
Gather insights into LLM behavior over time to identify trends and potential issues.

Utilize features like:
Monitoring dashboards to visualize LLM performance metrics.
Logging capabilities to capture detailed execution data for analysis.

Deployment and Infrastructure Management:
Simplifying debugging and troubleshooting of LLM applications across different
environments (e.g., dev, test, production).
Enabling smooth integration of LLMs with your existing infrastructure and monitoring tools.
