
Let’s build an LLM agent 🤖 in Python 🐍
Step-by-step
Why agents?
Because Large Language Models alone are not enough to accurately answer complex tasks that require

> External information that was not present in the training dataset used to fit the LLM parameters

or

> Many reasoning steps

For example
If you ask the OpenAI gpt-3.5-turbo LLM to

"Create a plot with Python of the number of games won by the Golden State Warriors in each of the last 2 NBA seasons."
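For illustration, this is roughly how you could send that prompt with the OpenAI Python client (a minimal sketch; the client setup and model name are my assumptions, not code from the original post):

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in your environment
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "Create a plot with Python of the number of games won by the "
            "Golden State Warriors in each of the last 2 NBA seasons."
        ),
    }],
)

# The model returns plausible plotting code, but the win counts it hard-codes
# come from its training data, not from the actual last 2 seasons.
print(response.choices[0].message.content)
```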

The response you get is

→ 100% correct Python code

because GPT-3.5 was trained on vast amounts of Python code available on the Internet, and hence works as a great code generator tool.

BUT

→ the data plotted is wrong: it is not from the last 2 seasons, but from 2019 and 2020.
How to fix this?
To fix this you need to supercharge your LLM with

→ A tool to retrieve the external data needed for the task

and

→ The reasoning capabilities to decide when and how to use that tool to answer the user query.

And this is what agents 🤖 are all about.

Let’s build one step-by-step.


Step 1. Pick your LLM
You can either use

→ a local LLM running on your laptop with Ollama, or

→ an external API like OpenAI or Cohere.
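As a rough sketch of what that looks like with LangChain (the exact import paths depend on your LangChain version, and the model names below are just examples, not the ones used in the original post):

```python
# Option A: a local LLM served by Ollama (assumes Ollama is installed
# and the model has been pulled locally)
from langchain_community.llms import Ollama

llm = Ollama(model="llama2")

# Option B: a hosted API such as OpenAI (assumes OPENAI_API_KEY is set)
# from langchain_openai import ChatOpenAI
# llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
```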
Step 2. Define your tools 🛠️
In this case we need:

1) An Internet search tool

2) A PythonREPL so the agent can run Python code to debug its output
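For example, with LangChain's built-in tools (a sketch; the original post doesn't say which search tool it used, so the DuckDuckGo tool below is just one option):

```python
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_experimental.tools import PythonREPLTool

search_tool = DuckDuckGoSearchRun()  # 1) Internet search
python_tool = PythonREPLTool()       # 2) Python REPL to run and debug code

tools = [search_tool, python_tool]
```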
Step 3. Build the ReAct agent
This step is super-simple thanks to a library like LangChain, which provides an implementation of the ReAct agent logic.

We first define the agent logic, using

> the tools we created,
> the base LLM we picked, and
> an initial prompt that can help us steer the agent in the right direction.

From this agent logic, we define the AgentExecutor, which is the runtime for an agent, that

> calls the agent,
> executes the actions it chooses,
> passes the action outputs back to the agent,
> and repeats.
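A minimal sketch with LangChain's ReAct helpers, reusing the llm and tools from the previous steps (pulling a ready-made ReAct prompt from the LangChain hub is my choice here; the original post may have written its own prompt):

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

# A standard ReAct prompt template; you can also write your own to steer the agent
prompt = hub.pull("hwchase17/react")

# The agent logic: LLM + tools + prompt
agent = create_react_agent(llm=llm, tools=tools, prompt=prompt)

# The runtime that calls the agent, executes its chosen actions,
# feeds the outputs back, and repeats
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,                # print each reasoning/acting step
    handle_parsing_errors=True,  # recover from malformed LLM output
)
```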
Step 4. Run the agent 🏃
We pass the input query

"Create a plot with Python of the number of games won by the Golden State Warriors in each of the last 2 NBA seasons."

to the agent_executor

...and the agent starts reasoning and acting, until it produces the correct plot!
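Something like this (a sketch, reusing the agent_executor from Step 3):

```python
query = (
    "Create a plot with Python of the number of games won by the "
    "Golden State Warriors in each of the last 2 NBA seasons."
)

# The executor runs the reason-act loop: search for the win counts,
# then write and execute the plotting code in the Python REPL
result = agent_executor.invoke({"input": query})
print(result["output"])
```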


Attention 📢
As you start experimenting with different input prompts, you will find scenarios in which the agent is not able to solve the task.

Moving from agent prototypes to production-ready agents is all about polishing rough edges, and ensuring the agent has access to the right tools.

But this is something we will leave for another day.
Wanna learn to build LLM apps?
Every day I share free, hands-on content on production-grade ML, to help you build real-world ML products.

Follow me and click on the 🔔 so you don't miss what's coming next.
