Workflow For A Function Calling Agent
This notebook walks through setting up a Workflow to construct a function calling agent from scratch.
Function calling agents work by using an LLM that supports tools/functions in its API (OpenAI, Ollama, Anthropic, etc.) to call functions and use tools.
Our workflow will be stateful with memory, and will be able to call the LLM to select tools and process incoming user messages.
import os
os.environ["OPENAI_API_KEY"] = "sk-proj-..."
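Before building the full workflow, it helps to see the primitive it orchestrates: a function calling LLM selecting a tool and filling in its arguments. Below is a minimal sketch; the multiply tool and the query are purely illustrative.

from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def multiply(x: int, y: int) -> int:
    """Multiply two numbers."""
    return x * y


llm = OpenAI(model="gpt-4o-mini")

# the LLM picks a tool, fills in its arguments, and returns the tool's output
response = llm.predict_and_call(
    [FunctionTool.from_defaults(multiply)], "What is 3 * 12?"
)
print(response)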
# Add Phoenix for tracing
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter as HTTPSpanExporter,
)

span_phoenix_processor = SimpleSpanProcessor(
    HTTPSpanExporter(endpoint="https://app.phoenix.arize.com/v1/traces")
)
Since workflows are async first, this all runs fine in a notebook. If you were running in your own code, you would want to
use asyncio.run() to start an async event loop if one isn't already running.
if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
Designing the Workflow
An agent consists of several steps:
1. Handling the latest incoming user message, including adding it to memory and getting the latest chat history
2. Calling the LLM with the chat history and tools
3. Parsing the LLM response for tool calls
4. If there are tool calls, calling them, and looping until there are none
5. When there are no tool calls, returning the LLM response
To handle these steps, we need to define a few custom events; the other steps will use the built-in StartEvent and StopEvent events.
from llama_index.core.llms import ChatMessage
from llama_index.core.tools import ToolSelection, ToolOutput
from llama_index.core.workflow import Event


class InputEvent(Event):
    input: list[ChatMessage]


class ToolCallEvent(Event):
    tool_calls: list[ToolSelection]


class FunctionOutputEvent(Event):
    output: ToolOutput
With our events defined, we can construct our workflow and steps.
Note that the workflow automatically validates itself using type annotations, so the type annotations on our steps are very
helpful!
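As a small, hypothetical illustration of that routing (HelloWorkflow and GreetingEvent are names invented here, not part of the library), each step's ev annotation declares which event it consumes, and its return annotation declares which events it may emit:

from llama_index.core.workflow import (
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)


class GreetingEvent(Event):
    name: str


class HelloWorkflow(Workflow):
    @step
    async def start(self, ev: StartEvent) -> GreetingEvent:
        # kwargs passed to .run() are available as StartEvent attributes
        return GreetingEvent(name=ev.name)

    @step
    async def greet(self, ev: GreetingEvent) -> StopEvent:
        return StopEvent(result=f"Hello, {ev.name}!")


# result = await HelloWorkflow().run(name="world")

Because event types are declared up front, the workflow can flag an event that is emitted but never consumed (or vice versa) before doing any real work.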
from typing import Any, List

from llama_index.core.llms.function_calling import FunctionCallingLLM
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.tools.types import BaseTool
from llama_index.core.workflow import Workflow, StartEvent, StopEvent, step
from llama_index.llms.openai import OpenAI


class FunctionCallingAgent(Workflow):
    def __init__(
        self,
        *args: Any,
        llm: FunctionCallingLLM | None = None,
        tools: List[BaseTool] | None = None,
        **kwargs: Any,
    ) -> None:
        super().__init__(*args, **kwargs)
        self.tools = tools or []

        self.llm = llm or OpenAI()
        assert self.llm.metadata.is_function_calling_model

        self.memory = ChatMemoryBuffer.from_defaults(llm=llm)
        self.sources = []
    @step
    async def prepare_chat_history(self, ev: StartEvent) -> InputEvent:
        # clear sources
        self.sources = []

        # get user input and add it to memory
        user_input = ev.input
        user_msg = ChatMessage(role="user", content=user_input)
        self.memory.put(user_msg)

        # get chat history
        chat_history = self.memory.get()
        return InputEvent(input=chat_history)
    @step
    async def handle_llm_input(
        self, ev: InputEvent
    ) -> ToolCallEvent | StopEvent:
        chat_history = ev.input

        # prompt the LLM with the chat history and the available tools
        response = await self.llm.achat_with_tools(
            self.tools, chat_history=chat_history
        )
        self.memory.put(response.message)

        tool_calls = self.llm.get_tool_calls_from_response(
            response, error_on_no_tool_call=False
        )

        if not tool_calls:
            return StopEvent(
                result={"response": response, "sources": [*self.sources]}
            )
        else:
            return ToolCallEvent(tool_calls=tool_calls)
    @step
    async def handle_tool_calls(self, ev: ToolCallEvent) -> InputEvent:
        tool_calls = ev.tool_calls
        tools_by_name = {tool.metadata.get_name(): tool for tool in self.tools}

        tool_msgs = []

        # call tools -- safely!
        for tool_call in tool_calls:
            tool = tools_by_name.get(tool_call.tool_name)
            additional_kwargs = {
                "tool_call_id": tool_call.tool_id,
                "name": tool_call.tool_name,
            }
            if not tool:
                tool_msgs.append(
                    ChatMessage(
                        role="tool",
                        content=f"Tool {tool_call.tool_name} does not exist",
                        additional_kwargs=additional_kwargs,
                    )
                )
                continue

            try:
                tool_output = tool(**tool_call.tool_kwargs)
                self.sources.append(tool_output)
                tool_msgs.append(
                    ChatMessage(
                        role="tool",
                        content=tool_output.content,
                        additional_kwargs=additional_kwargs,
                    )
                )
            except Exception as e:
                tool_msgs.append(
                    ChatMessage(
                        role="tool",
                        content=f"Encountered error in tool call: {e}",
                        additional_kwargs=additional_kwargs,
                    )
                )

        # add the tool messages to memory and loop back to the LLM
        for msg in tool_msgs:
            self.memory.put(msg)

        chat_history = self.memory.get()
        return InputEvent(input=chat_history)
prepare_chat_history() : This is our main entry point. It handles adding the user message to memory, and uses the memory to get the latest chat history. It returns an InputEvent .
handle_llm_input() : Triggered by an InputEvent , it uses the chat history and tools to prompt the LLM. If tool calls are found, a ToolCallEvent is emitted. Otherwise, we say the workflow is done and emit a StopEvent .
handle_tool_calls() : Triggered by a ToolCallEvent , it calls tools with error handling and returns tool outputs. This step triggers a loop, since it emits an InputEvent , which takes us back to handle_llm_input() .
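If you want to see this loop (and every other possible path through the workflow), you can optionally render the event graph. This is a sketch assuming the separate llama-index-utils-workflow package is installed:

from llama_index.utils.workflow import draw_all_possible_flows

# writes an interactive HTML visualization of the event graph
draw_all_possible_flows(
    FunctionCallingAgent, filename="function_calling_agent.html"
)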
Run the Workflow!
With the workflow defined, we can create some simple math tools and run the agent:

from llama_index.core.tools import FunctionTool


def add(x: int, y: int) -> int:
    """Useful function to add two numbers."""
    return x + y


def multiply(x: int, y: int) -> int:
    """Useful function to multiply two numbers."""
    return x * y


tools = [
    FunctionTool.from_defaults(add),
    FunctionTool.from_defaults(multiply),
]
agent = FunctionCallingAgent(
    llm=OpenAI(model="gpt-4o-mini"), tools=tools, timeout=120, verbose=True
)
ret = await agent.run(input="Hello!")
Running step prepare_chat_history
Step prepare_chat_history produced event InputEvent
Running step handle_llm_input
Step handle_llm_input produced event StopEvent
print(ret["response"])
assistant: Hello! How can I assist you today?
ret = await agent.run(input="What is (2123 + 2321) * 312?")
print(ret["response"])
assistant: The result of \((2123 + 2321) \times 312\) is \(1,386,528\).
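The result also carries the sources list collected in handle_tool_calls() , which is a quick way to check which tool calls produced the answer; each entry is a ToolOutput:

# each source is a ToolOutput captured during handle_tool_calls()
for source in ret["sources"]:
    print(source.tool_name, source.raw_input, source.raw_output)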