
Giza Docs


👋 Welcome to Giza documentation


Build reliable, scalable, and easy-to-integrate AI solutions for web3.

What can you do with Giza?


Create AI Agents

Create AI Agents and connect them to your smart contracts.

giza agents create --model-id <model-id> --version-id <version-id> --name <agent-name>

Transpile

Transpile your model to a ZKML model.

giza transpile awesome_model.onnx --output-path my_awesome_model

Deploy Inference Endpoint

Deploy a verifiable inference endpoint.

giza endpoints deploy --model-id 1 --version-id 1

Use Web3 Datasets

Access curated blockchain datasets for machine learning.


from giza.datasets import DatasetsHub, DatasetsLoader

hub = DatasetsHub()

🌱 Where to start?
⏩ Quickstart
Learn the basics to get started using Giza Platform, AI Agents and Datasets SDK.

🧱 Products
Consult the documentation for all Giza products.

🧑‍🎓️ Tutorials
Learn by doing, with many tutorials.
Installation
How to install the Giza ecosystem.

To use the Giza ecosystem, you'll need the Giza CLI and the Giza Agents SDK.

In this resource you'll learn how to:

Handle Python versions with Pyenv


Install CLI
Install Agents SDK
Install Datasets SDK

Handling Python versions with Pyenv


All our tools have been tested with Python 3.11, so we recommend using this version.

You should install Giza tools in a virtual environment. If you’re unfamiliar with Python
virtual environments, take a look at this guide. A virtual environment makes it easier to
manage different projects and avoid compatibility issues between dependencies.

Install Python 3.11 using pyenv

pyenv install 3.11.0

Set Python 3.11 as local Python version:

pyenv local 3.11.0

Create a virtual environment using Python 3.11:

pyenv virtualenv 3.11.0 my-env

Activate the virtual environment:


pyenv activate my-env

Now, your terminal session will use Python 3.11 for this project.
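Optionally, you can confirm which interpreter the activated environment is using with a quick check (standard library only, shown here as a small sketch):

import sys

# Should report 3.11.x when run inside the activated environment
print(sys.version)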

Install giza-sdk to set up the complete Giza stack

giza-sdk is a meta-package that installs the entire Giza stack, including CLI, agents, datasets and zkcook.

Option 1 (recommended) - Installation with pip

pip install giza-sdk

Option 2 - Install from source

Install giza-sdk from source with the following command:

pip install git+https://github.com/gizatechxyz/giza-sdk

This command installs the bleeding-edge main version rather than the latest
stable version. The main version is useful for staying up to date with the
latest developments, for instance when a bug has been fixed since the last
official release but a new release hasn't been rolled out yet. However, this
means the main version may not always be stable. We strive to keep the
main version operational, and most issues are usually resolved within a few
hours or a day. If you run into a problem, please open an Issue so we can fix it
even sooner!

Once installed, you can use each of these packages independently under the Giza
namespace. Some examples might be:
from giza.datasets import DatasetsLoader
from giza.agents import GizaAgent
from giza.zkcook import serialize_model

And the CLI will be usable from the command line:

giza --help
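As a quick post-install sanity check, the example imports above can be combined into a tiny script; this is just a sketch that exercises only the imports documented on this page and assumes giza-sdk installed without errors:

from giza.datasets import DatasetsLoader
from giza.agents import GizaAgent
from giza.zkcook import serialize_model

# If all imports succeed, the Giza stack is available in this environment
print("giza-sdk imports OK")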

Install CLI
The Giza CLI facilitates user creation and login, transpiles models into ZK models, and
deploys a verifiable inference endpoint. These steps are essential for creating an Agent.

Option 1 (recommended) - Installation with pip

pip install giza-cli

Option 2 - Installing from source

Clone the repository and install it with pip :

git clone git@github.com:gizatechxyz/giza-cli.git


cd giza-cli
pip install .

Or install it directly from the repo:

pip install git+ssh://git@github.com/gizatechxyz/giza-cli.git

Install Agents SDK


The Agent SDK lets you build verifiable inferences and connect them to smart contracts,
directly in Python.

Option 1 (recommended) - Install from PyPi

pip install giza-agents

Option 2 - Install from source

Install Agents from source with the following command:

pip install 'giza-agents @ git+https://github.com/gizatechxyz/giza-agents.git'

This command installs the bleeding-edge main version rather than the latest
stable version. The main version is useful for staying up to date with the
latest developments, for instance when a bug has been fixed since the last
official release but a new release hasn't been rolled out yet. However, this
means the main version may not always be stable. We strive to keep the
main version operational, and most issues are usually resolved within a few
hours or a day. If you run into a problem, please open an Issue so we can fix it
even sooner!

Option 3 - Editable install

You will need an editable install if you’d like to:

Use the main version of the source code.


Contribute to Agents or ⚡ Actions and need to test changes in the code.
Clone the repository and install Agents with the following commands:

git clone https://github.com/gizatechxyz/giza-agents.git


cd giza-agents
pip install -e .
These commands will link the folder you cloned the repository to and your
Python library paths. Python will now look inside the folder you cloned to in
addition to the normal library paths. For example, if your Python packages
are typically installed in
~/anaconda3/envs/main/lib/python3.11/site-packages/ , Python will
also search the folder you cloned to: ~/giza-agents/ .

You must keep the giza-agents folder if you want to keep using the library.

Now you can easily update your clone to the latest version of Agents and Actions with the following command:

cd ~/giza-agents/
git pull

Your Python environment will find the main version of ⚡ Actions on the
next run.

Install Datasets SDK


The Dataset SDK gives you access to curated datasets for Web3.

Option 1 (recommended) - Install from PyPi

Install the Datasets SDK with the following command:

pip install giza-datasets

Option 2 - Install from source

Install giza-datasets from source with the following command:

pip install git+https://github.com/gizatechxyz/datasets


This command installs the bleeding-edge main version rather than the latest
stable version. The main version is useful for staying up to date with the
latest developments, for instance when a bug has been fixed since the last
official release but a new release hasn't been rolled out yet. However, this
means the main version may not always be stable. We strive to keep the
main version operational, and most issues are usually resolved within a few
hours or a day. If you run into a problem, please open an Issue so we can fix it
even sooner!

Option 3 - Editable install

You will need an editable install if you’d like to:

Use the main version of the source code.


Contribute to giza-datasets and need to test changes in the code.

Clone the repository and install giza-datasets with the following commands:

git clone https://github.com/gizatechxyz/datasets.git


cd datasets
pip install -e .

These commands will link the folder you cloned the repository to and your
Python library paths. Python will now look inside the folder you cloned to in
addition to the normal library paths. For example, if your Python packages
are typically installed in
~/anaconda3/envs/main/lib/python3.7/site-packages/ , Python will also
search the folder you cloned to: ~/datasets/ .

You must keep the datasets folder if you want to keep using the library.

Now you can easily update your clone to the latest version of giza-datasets
with the following command:

cd ~/datasets/
git pull
Your Python environment will find the main version of giza-datasets on the
next run.
Quickstart
In this resource, you'll learn the basics to get started using Giza products!

Before you begin, make sure you have all the necessary libraries installed.

Installation Guide

Create a User and Log In

From your terminal, create a Giza user through our CLI in order to access the
Giza Platform:

giza users create

After creating your user, log into Giza:

giza users login

Optional: you can create an API key for your user so you don't have to regenerate
your access token every few hours.

giza users create-api-key

Transpile a Model and Deploy an Inference Endpoint

This step is only necessary if you have not yet deployed a verifiable
inference endpoint.

Transpilation
Transpilation is a crucial process in the deployment of Verifiable Machine
Learning models. It involves the transformation of a model into a Cairo model.
These models can generate ZK proofs.
The transpilation process starts by reading the model from the specified
path. The model is then sent for transpilation.

giza transpile awesome_model.onnx --output-path my_awesome_model


[giza][2024-02-07 16:31:20.844] No model id provided, checking if model exists ✅
[giza][2024-02-07 16:31:20.845] Model name is: awesome_model
[giza][2024-02-07 16:31:21.599] Model Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.436] Version Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.437] Sending model for transpilation ✅
[giza][2024-02-07 16:32:13.511] Transpilation is fully compatible. Version compiled and Sierra is saved at Giza ✅
[giza][2024-02-07 16:32:13.516] Transpilation received! ✅
[giza][2024-02-07 16:32:14.349] Transpilation saved at: my_awesome_model

During transpilation, an instance of a model and a version were created on the Giza platform.

For more information about transpilation, please check the Transpile resource.

Deploy an Inference Endpoint


To create a new service, users can employ the deploy command. This
command facilitates the deployment of a verifiable machine learning service
ready to accept predictions at the /cairo_run endpoint, providing a
straightforward method for deploying and using machine learning capabilities
that can easily be consumed as an API endpoint.

> giza endpoints deploy --model-id 1 --version-id 1


▰▰▰▰▰▱▱ Creating endpoint!
[giza][2024-02-07 12:31:02.498] Endpoint is successful ✅
[giza][2024-02-07 12:31:02.501] Endpoint created with id -> 1 ✅
[giza][2024-02-07 12:31:02.502] Endpoint created with endpoint U

Create your first Agent

To create a new agent, an inference endpoint must have been deployed beforehand. If you don't have one, please follow the previous section.
Step 1: Create an Account
The first step is to create an account (wallet) using Ape's framework; this will
be the account that we will use to sign the transactions of the smart
contract. We can do this by running the following command and providing the
required data.

$ ape accounts generate <account name>


Enhance the security of your account by adding additional random input:
Show mnemonic? [Y/n]: n
Create Passphrase to encrypt account:
Repeat for confirmation:
SUCCESS: A new account '0x766867bB2E3E1A6E6245F4930b47E9aF54cEba0C'

❗ It will ask you for a passphrase; make sure to save it in a safe place, as it
will be used to unlock the account when signing.

We encourage the creation of a new account for each agent, as it will allow
you to manage the agent's permissions and access control more effectively,
but importing accounts is also possible.

Step 2: Fund the Account


Before we can create an AI Agent, we need to fund the account with some
ETH. You can do this by sending some ETH to the account address generated
in the previous step.

If you are using Sepolia testnet, you can get some testnet ETH from a faucet
like Alchemy Sepolia Faucet or LearnWeb3 Faucet. If you are on mainnet you
will need to transfer funds to it.

Step 3: Create an Agent using the CLI


With a funded account we can create an AI Agent. We can do this by running
the following command:

giza agents create --model-id <model-id> --version-id <version-id> --name <agent-name> --description <description>

The information needed is:

model-id : the id of the model used in the endpoint.


version-id : the id of the version used in the endpoint.
name : name for the agent.
description : an optional description of the agent.

This command will prompt you to select an Ape account; specify the one that
you want to use and the agent will be created.

Step 4: Setup your Agent with the SDK


Now you are set to use an Agent through code and interact with smart
contracts.

⚠️ An environment variable named <ACCOUNT_NAME>_PASSPHRASE must be set in the
environment. This is needed to decrypt the account and sign transactions.
from giza.agents import GizaAgent

agent = GizaAgent(
id=<model-id>,
version_id=<version-id>,
contracts={
# Contracts uses <alias>:<contract-address> dict
"mnist": "0x17807a00bE76716B91d5ba1232dd1647c4414912",
"token": "0xeF7cCAE97ea69F5CdC89e496b0eDa2687C95D93B",
},
# Specify the chain that you are using
chain="ethereum:sepolia:geth",
)

result = agent.predict(input_feed=[42], verifiable=True)

# This is only using read functions, so no need to sign

with agent.execute() as contracts:
    # Accessing the value will block until the prediction is verified
    if result.value > 0.5:
        # Read the contract name
        result = contracts.mnist.name()
        print(result)
    elif result.value < 0.5:
        # Read the contract name
        result = contracts.token.name()
        print(result)
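For illustration only, here is one way to make that passphrase available to the process from Python before constructing the agent. The account name my_account is an assumption for this sketch; in practice you would typically export the variable in your shell or load it from a secret store rather than hard-coding it.

import os

# Hypothetical account named "my_account" -> expected variable MY_ACCOUNT_PASSPHRASE.
# Load the secret from a safe place instead of hard-coding it in real code.
os.environ["MY_ACCOUNT_PASSPHRASE"] = "<your-passphrase>"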

Use Datasets SDK

This section is mainly intended for developers who are already accustomed to
the fundamentals of Python, as well as its common ML libraries and frameworks.

1. Import giza-datasets
from giza.datasets import DatasetsHub, DatasetsLoader

Additionally, it might be required to run the following lines. See DatasetsLoader.
import os
import certifi

os.environ['SSL_CERT_FILE'] = certifi.where()

2. Query the datasets using a DatasetsHub object
hub = DatasetsHub()

With the DatasetsHub() object, we can now query the DatasetsHub to find
the perfect dataset for our ML model. See DatasetsHub for further
instructions. Alternatively, you can check the DatasetsHub pages to explore the
available datasets from your browser.

Let's use the list_tags() function to list all the tags and then
get_by_tag() to query all the datasets with the "Yearn-v2" tag.

print(hub.list_tags())

['Trade Volume', 'DeFi', 'Yearn-v2', 'Interest Rates', 'compound-v2', ...]

Yearn-v2 looks interesting; let's search all the datasets that have the 'Yearn-v2' tag.

datasets = hub.get_by_tag('Yearn-v2')

for dataset in datasets:
    hub.describe(dataset.name)

Details for yearn-individual-deposits
  Attribute       Value
  Path            gs://datasets-giza/Yearn/Yearn_Individual_Dep...
  Description     Individual Yearn Vault deposits
  Tags            DeFi, Yield, Yearn-v2, Ethereum, Deposits
  Documentation   https://datasets.gizatech.xyz/hub/yearn/indiv...

yearn-individual-deposits looks great!

3. Load a dataset using DatasetsLoader

loader = DatasetsLoader()

Having instantiated the DatasetsLoader(), all we need to do is load the dataset using the name we have queried using DatasetsHub().

df = loader.load('yearn-individual-deposits')

df.head()

shape: (5, 7)

Keep in mind that giza-datasets uses Polars (and not Pandas) as the
underlying DataFrame library.
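If you prefer working in Pandas, the loaded Polars DataFrame can be converted. This is a minimal sketch relying on the standard Polars to_pandas() method and assumes pandas (and pyarrow) are installed in your environment:

df_pd = df.to_pandas()  # convert the Polars DataFrame returned by the loader into Pandas
print(df_pd.shape)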

Perfect, the Dataset is loaded correctly and ready to go! Now we can use our
preferred ML Framework and start building.
Contribution Guidelines
We strongly encourage our users to create tutorials for their open-source ML
projects using the Giza Platform and its components. When your PR for a new
tutorial in Giza Hub gets merged, you become eligible for OnlyDust grants!

For consistency and clarity, we ask all user tutorials to follow certain criteria and
guidelines, which will be described below. Tutorials such as the MNIST Model are great
examples of content and style.

Giza Tutorials
Giza tutorials are meant to be complete, open-source projects built by the Giza Data
Science team and Giza Users, to showcase the use of the Giza Framework in an end-to-
end setting, as well as to illustrate potential ZKML use cases and inspire new ideas. Given
the relative immaturity of the ZKML ecosystem, we believe there are many ZKML use
cases yet to be discovered and we welcome everyone to be a part of the first explorers!

Giza Tutorial Project Requirements

General
Not all good ML use cases are good ZKML use cases. Given the additional
complexity and costs, verifiability of the model inferences must be a requirement or
must provide a significant advantage for the given task. As Giza, we expect all
submitted tutorials to have clear ZKML use cases.
Tutorials are created within Python virtual environments and are reproducible, using
a requirements.txt or poetry.lock in the root dir.
(If the project uses data that is not available through Giza Datasets) : Project contains
data collection scripts to load/fetch the data inside the project. Do not push data
files in Giza Hub Repo.
Similarly, do not push any model files. Model files include .pt, .onnx, or verifiable
Cairo models.

Model Development
Given some of the current restrictions in model complexity, some ZKML models are
expected to have low performance. However, we expect all projects to have a
fitting set of performance metrics given the task, a section of the
documentation on how to interpret them, as well as steps that can potentially be
taken from a model architecture point of view to improve the model performance.
We expect the model development to be well documented and follow the general data
science conventions and order of operations.

Readme
The project repository must contain a readme file with the following sections:

We strongly encourage the tutorial authors to use code snippets for the Project Installation,
Overview of Model Development, Model Performance, and Giza Integration Sections.
Showing individual commands and lines of code is always more informative than
instructions.

Problem Setting - Introduce the task and the problem setting. Why is ZKML
useful/necessary? Are there any papers/resources that can be read to learn more
about the problem?
Project Installation - How can another developer reproduce the project, install
dependencies etc.
Overview of Model Development - The crucial steps of the model development
process. Not every step is necessary, but model architecture and/or model training
scheme probably are.
Model Performance - The description of the model performance metric, as well as the
measurements from the developed model from the testing. (Here is a good point to
also discuss possible improvements to the model architecture.)
Giza Integration - How can we use Giza CLI & giza-agents to make the model
verifiable? What are the individual Giza components that are used along the way?
Affiliations - If affiliated with any team/organization, make sure it is
stated in the readme for clarity.

How to contribute?
After making sure your tutorials follow the guidelines above, simply create a new branch
in the Giza-Hub GitHub repository and push your model under the awesome-giza-agents folder.
If your tutorial is original, reproducible, fits the guidelines and is a valuable addition to
our zkML use-case collection, you will be eligible for OnlyDust rewards!
Products
All Products

Platform
Automate and streamline processes for zero-knowledge machine learning.

AI Agents
Build agents that automate smart strategies on top of decentralized protocols.

Datasets
Curated, structured, labeled blockchain data to train and build your machine learning models.
Platform
Resources
Users
Manage users.

Create
Login
Create API Key
Me

Create a user
Allows you to create a user using the CLI. The username must be unique and the email
account should not have been used previously for another user.

> giza users create

Enter your username 😎: gizabrain
Enter your password 🥷: (this is a secret)
Enter your email 📧: [email protected]
[giza][2023-06-23 12:29:41.417] User created ✅. Check for a verification email
This will create an inactive user in Giza; to activate it, you need to verify your user
through the verification email.

If there is an error or you want more information about what is going on, there is
a --debug flag that will add more information about the error. This will print outgoing
requests to the API, debug logs, and a Python traceback of what happened.

⚠️ Note: be aware that the debug option will print everything that is sent to the API; in
this case the password will be printed as plain text in the terminal. If you are using the
debug option to file an issue, make sure to remove the credentials.
Login a user
Log into the Giza platform and retrieve a JWT for authentication. This JWT will be stored to
authenticate you later until the token expires.

You need to have an active account to log in.

> giza users login

Enter your username 😎: gizabrain
Enter your password 🥷:
[giza][2023-06-23 12:32:17.917] Log into Giza
[giza][2023-06-23 12:32:18.716] ⛔️ Could not authorize the user ⛔️
[giza][2023-06-23 12:32:18.718] ⛔️ Status code -> 400 ⛔️
[giza][2023-06-23 12:32:18.719] ⛔️ Error message -> {'detail': 'Inactive user'}

Once activated you can successfully log into Giza:

> giza users login

Enter your username 😎: gizabrain
Enter your password 🥷:
[giza][2023-07-12 10:52:25.199] Log into Giza
[giza][2023-07-12 10:52:46.998] Credentials written to: /Users/gizabrain/
[giza][2023-07-12 10:52:47.000] Successfully logged into Giza ✅
If you want to force the renewal of the token, you can use --renew to force the login. If the
flag is not present, we verify whether there has been a previous login and check that the token
is still valid.

> giza users login

Enter your username 😎: gizabrain
Enter your password 🥷:
[giza][2023-07-12 10:55:26.219] Log into Giza
[giza][2023-07-12 10:55:26.224] Token it still valid, re-using it from ~/
[giza][2023-07-12 10:55:26.224] Successfully logged into Giza ✅
With --renew :
> giza users login --renew

Enter your username 😎: gizabrain
Enter your password 🥷:
[giza][2023-07-12 10:56:44.316] Log into Giza
[giza][2023-07-12 10:56:44.979] Credentials written to: /Users/gizabrain/
[giza][2023-07-12 10:56:44.980] Successfully logged into Giza ✅
Note: --debug is also available.

Create API key


Create an API key for the current user. This API key will be stored and will be used to
authenticate the user in the future.

You need to have an active account to log in.

> giza users create-api-key


[giza][2024-01-17 15:27:27.936] Creating API Key ✅
[giza][2024-01-17 15:27:53.605] API Key written to: /Users/gizabrain/.giza
[giza][2024-01-17 15:27:53.606] Successfully created API Key. It will be u

Now you can use the API key to authenticate yourself without the need to log in again
and again.

NOTE: The usage of an API key is less secure than a JWT, so use it with caution.

Retrieve my user info


Retrieve information about the current user.

You need to have an active account.


> giza users me

[giza][2023-07-12 10:59:43.821] Retrieving information about me!


[giza][2023-07-12 10:59:43.823] Token it still valid, re-using it from ~/
{
"username": "gizabrain",
"email": "[email protected]",
"is_active": true
}

Note: --debug is also available.


Models
Handle your models.

In the Giza Platform, a model represents a container for versions of your machine learning
model. This design allows you to iterate and improve your ML model by creating new
versions for it. Each model can have multiple versions, providing a robust and flexible
way to manage the evolution of your ML models. This traceability feature ensures that
you have a clear record of the original model used for transpilation, who performed the
transpilation, and the output generated.

Remember, you need to be logged in to use these functionalities!

Create a model
List models
Retrieve model information
Transpile a model

Create a Model
Creating a new model in Giza is a straightforward process. You only need to provide a
name for the model using the --name option. You can also add a description of the
model using the --description option. This can be helpful for keeping track of
different models and their purposes.

Here's how you can do it:

> giza models create --name my_new_model --description "A New Model"
[giza][2023-09-13 13:24:28.223] Creating model ✅
{
"id": 1,
"name": "my_new_model",
"description": "A New Model"
}
Typically, the transpile command is used to handle model creation. During this
process, the filename is checked for an existing model. If none is found, a new model is
automatically created. However, manual creation of a model is also supported. For more
information, refer to the transpile documentation.

List Models
Giza provides a simple and efficient way to list all the models you have stored on the
server. This feature is especially useful when you have multiple models and need to
manage them effectively.

To list all your models, you can use the list command. This command retrieves and
displays a list of all models stored on the server. Each model's information is printed in
JSON format for easy readability and further processing.

Here's how you can do it:

> giza models list


[giza][2023-09-13 13:11:39.403] Listing models ✅
[
{
"id": 1,
"name": "my_new_model",
"description": "A New Model"
},
{
"id": 2,
"name": "Test",
"description": "A Model for testing different models"
}
]

Retrieve Model Information


You can retrieve detailed information about a model stored on the server using its unique
model id. This includes its name, description, and id:

> giza models get --model-id 1 # When we create a model for you, we output its id
[giza][2023-09-13 13:17:53.594] Retrieving model information ✅
{
"id": 1,
"name": "my_new_model",
"description": "A New Model"
}

Now we can see that we have a model successfully transpiled!

Transpile a model
Note: This is explained extensively in the transpile documentation.

Transpiling a model in Giza is a crucial step in the model deployment process as an
endpoint. Transpilation is the process of converting your machine learning model into a
format that can be executed on Giza. Depending on the ZKML framework chosen, this
process involves converting the model into a series of Cairo instructions.

When you execute the 'transpile' command, it initially checks for the presence of the
model on the Giza platform. If no model is found, it automatically generates one and
performs the transpilation. Here is an example of the command:

giza transpile awesome_model.onnx --output-path my_awesome_model

It's worth noting that if you already have created a model, you can transpile it by
specifying the model ID:

giza transpile --model-id 1 awesome_model.onnx --output-path my_awesome_model

This method is useful when you intend to create a new version of an existing model.
For more information, refer to the transpile documentation.
Versions
Manage your model versions.

In Giza Platform, a version represents a specific iteration of your machine learning model
within a Giza Model. This design allows you to iterate and improve your ML model by
creating new versions for it. Each model can have multiple versions, providing a robust
and flexible way to manage the evolution of your ML models. This traceability feature
ensures that you have a clear record of each version of your model.

Remember, you need to be logged in to use these functionalities!

Retrieve version information


List versions
Transpile a model version
Download a transpiled version

Retrieve Version Information


You can retrieve detailed information about a specific version of a model using its unique
model ID and version ID. This includes its version number, size, description, status,
creation date, and last update date.

> giza versions get --model-id 1 --version-id 1


[giza][2023-09-13 14:38:30.965] Retrieving version information ✅
{
"version": 1,
"size": 52735,
"status": "COMPLETED",
"message": "Transpilation Successful",
"description": "Initial version of the model",
"created_date": "2023-07-04T11:15:09.448709",
"last_update": "2023-08-25T11:08:51.815545"
}
Retrieve Transpilation Logs
The logs of a transpilation can be retrieved using the provided logs command:

> giza versions logs --model-id 1 --version-id 1

List Versions
Giza Platform provides a simple and efficient way to list all the versions of a specific
model you have. This feature is especially useful when you have multiple versions of a
model and need to manage them effectively.

To list all your versions of a model, you can use the list command. Each version's
information is printed in a json format for easy readability and further processing.

Here's how you can do it:


> giza versions list --model-id 1
[giza][2023-09-13 14:41:09.209] Listing versions for the model ✅
[
{
"version": 1,
"size": 52735,
"status": "COMPLETED",
"message": "Transpilation Successful",
"description": "Initial version of the model",
"created_date": "2023-07-04T11:15:09.448709",
"last_update": "2023-08-25T11:08:51.815545"
},
{
"version": 2,
"size": 52735,
"status": "COMPLETED",
"message": "Transpilation Successful!",
"description": "Intial version",
"created_date": "2023-09-13T10:24:20.018476",
"last_update": "2023-09-13T10:24:24.376009"
}
]

Transpile a Model Version


Note: This is explained extensively in the transpile documentation.

Transpiling a model version in Giza Platform is a crucial step in the model deployment
process as an endpoint. Transpilation is the process of converting your machine learning
model into a format that can be executed on Giza Platform.

When you transpile a model, you're essentially creating a new version of that model. Each
version represents a specific iteration of your machine learning model, allowing you to
track and manage the evolution of your models effectively.
> giza versions transpile awesome_model.onnx --output-path my_awesome_model
[giza][2023-09-13 12:56:43.725] No model id provided, checking if model exists ✅
[giza][2023-09-13 12:56:43.726] Model name is: awesome_model
[giza][2023-09-13 12:56:43.978] Model Created with id -> 1! ✅
[giza][2023-09-13 12:56:44.568] Sending model for transpilation ✅
[giza][2023-09-13 12:56:55.577] Transpilation received! ✅
[giza][2023-09-13 12:56:55.583] Transpilation saved at: cairo_model

Once the transpilation process is complete, a new version of the model is created in the Giza
Platform. The version will be downloaded and saved at the specified output path, but you
can also run the download command later to download it again.

Download a Transpiled Version


Once a model has been successfully transpiled, it's not necessary to go through the
transpilation process again. The transpiled version is stored and can be downloaded
anytime you need it. This is done using the download command in the CLI. This
command specifically requires the model_id and version_id to accurately identify
and download the correct transpiled version. This feature saves time and computational
resources, making the management of your models more efficient.

> giza versions download --model-id 1 --version-id 1 --output-path path


[giza][2023-08-04 10:33:14.271] Transpilation is ready, downloading! ✅
[giza][2023-08-04 10:33:15.134] Transpilation saved at: path

Let's check the downloaded version:

> tree cairo_model/


cairo_model
├── inference
│ ├── Scarb.toml
│ └── src
│ └── lib.cairo
└── initializers
├── node_l1
│ ├── Scarb.toml
│ └── src
│ └── lib.cairo
For more information on how to transpile a version, refer to the transpile documentation.
Transpile
Transpilation is a crucial process in the deployment of Verifiable Machine Learning
models. It involves the transformation of a model into a ZK model. These models can
generate proofs that can be verified, ensuring the integrity and reliability of the model's
predictions.

The transpilation of a model to a ZK model is powered by ✨ Orion✨ .


How to transpile a model?
Transpilation results
What is happening with the models and versions?

How to transpile a model?


Method 1 (Recommended) - Using giza transpile command.

> giza transpile awesome_model.onnx --output-path my_awesome_model

[giza][2024-02-07 16:31:20.844] No model id provided, checking if model exists ✅
[giza][2024-02-07 16:31:20.845] Model name is: awesome_model
[giza][2024-02-07 16:31:21.599] Model Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.436] Version Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.437] Sending model for transpilation ✅
[giza][2024-02-07 16:32:13.511] Transpilation is fully compatible. Version compiled and Sierra is saved at Giza ✅
[giza][2024-02-07 16:32:13.516] Transpilation received! ✅
[giza][2024-02-07 16:32:14.349] Transpilation saved at: my_awesome_model

This is the simplest method and is recommended for most users.


When you run this command, Giza handles everything for you:

It first checks if a model with the specified name already exists on Giza
Platform. If not, it creates a new model and then transpiles it.
The output of this process is saved in the cairo_model/ folder by
default, but you can specify a different output path using the
--output-path option.
The result of the transpilation process is saved at the provided path, in this
case, my_awesome_model/ .

> tree my_awesome_model/


my_awesome_model
├── inference
│ ├── Scarb.toml
│ └── src
│ └── lib.cairo
└── initializers
├── node_l1
│ ├── Scarb.toml
│ └── src
│ └── lib.cairo

When we transpile a model we have two possibilities: a fully compatible model and a partially compatible one.

A model is fully compatible when all the operators that the model uses are
supported by the transpiler. If this happens, the model is compiled after
transpilation and we save the compiled file on behalf of the user to use later
for endpoint deployment (endpoint docs). This will be shown in the output of
the transpile command:

[giza][2024-02-07 16:32:13.511] Transpilation is fully compatible. Version compiled and Sierra is saved at Giza ✅

If a model is partially supported, we will show a warning in the output stating
that not all the operators are supported right now. If it is partially supported,
the Cairo code can still be modified for later compilation and endpoint deployment.

[WARN][2024-02-07 16:42:31.209] 🔎 Transpilation is partially supported. Some operators are not yet supported in the Transpiler/Orion
[WARN][2024-02-07 16:42:31.211] Please check the compatibility list in Orion: https://cli.gizatech.xyz/frameworks/cairo/transpile#supported-operators

Please check the compatibility list.


Method 2 - Manually creating a model and then transpiling it

This method gives you more control over the process.

1. First, you create a model manually using the giza models create
command.
2. After the model is created, you can transpile it using the
giza transpile --model-id ...

This method is useful when you want to specify particular options or parameters during the model creation and transpilation process.

> giza models create --name awesome_model --description "A Model for testing different models"


[giza][2023-09-13 14:04:59.532] Creating model ✅
{
"id": 2,
"name": "awesome_model",
"description": "A Model for testing different models"
}

> giza transpile --model-id 2 awesome_model.onnx --output-path new_awesome_model

[giza][2023-09-13 14:08:38.022] Model found with id -> 2! ✅
[giza][2024-02-07 14:08:38.432] Version Created with id -> 1! ✅
[giza][2023-09-13 14:08:38.712] Sending model for transpilation ✅
[giza][2023-09-13 14:08:49.879] Transpilation received! ✅
[giza][2023-09-13 14:08:49.885] Transpilation saved at: new_awesome_model

Method 3: Using a previous model

If you have a previously created model, you can transpile it by indicating the
model-id in the giza transpile --model-id ... or
giza versions transpile --model-id command.

This method is useful when you want to create a new version of an existing
model.
The output of the transpilation process is saved in the same location as
the original model.
# Using the previous model (id: 2) we can transpile a new model
giza transpile --model-id 29 awesome_model.onnx --output-path new_awesome_model
[giza][2023-09-13 14:11:30.015] Model found with id -> 2! ✅
[giza][2024-02-07 14:11:30.225] Version Created with id -> 2! ✅
[giza][2023-09-13 14:11:30.541] Sending model for transpilation ✅
[giza][2023-09-13 14:11:41.601] Transpilation received! ✅
[giza][2023-09-13 14:11:41.609] Transpilation saved at: new_awesome_model

Transpilation Results
When a version is transpiled the version can be in the following statuses:

FAILED : the transpilation of the model failed


COMPLETED : the version transpilation is completed and the version is FULLY
compatible. This means that all of the version's operators are supported during
transpilation, and the model has been compiled to Sierra and saved in the platform.
The model can be directly deployed without the need to provide a Sierra file. This
model is "frozen", so it will not allow code or model updates; if any changes are
made to the model, a new version should be created.
PARTIALLY_SUPPORTED : not all the operators are supported in the transpilation, but
they might be supported in Orion, so partially working code will be returned,
allowing modifications of the code to update the version into a fully compatible
one. Once this version is updated, we will compile it and, if successful, the
new code and the Sierra will be uploaded to Giza, the status will be updated to
COMPLETED, and the version will be frozen, not allowing any more modifications.

We try to support all the available operators in Orion, but there might be a slight lag between
Orion's implementation and transpilation availability.

How to update a transpilation


If your model version is in a PARTIALLY_SUPPORTED status, you can work towards
achieving a COMPLETED status by updating the transpilation. The update process
involves modifying the unsupported operators and compiling the model. Here's how to
update a transpilation, from creation to fully supported:

1. Transpile the model with giza transpile

2. Modify your cairo model to address the unsupported operators.


3. Execute the giza versions update command. This command needs scarb to be
installed (docs); compilation will be attempted and, if successful, the code and Sierra file
will be updated in Giza.

Example: Updating a Version

Say you have an awesome_model.onnx that is PARTIALLY_SUPPORTED :

❯ giza transpile awesome_model.onnx


[giza][2024-02-12 13:19:55.957] No model id provided, checking if model exists ✅
[giza][2024-02-12 13:19:55.958] Model name is: awesome_model
[giza][2024-02-12 13:19:56.207] Model Created with id -> 1! ✅
[giza][2024-02-12 13:19:56.710] Version Created with id -> 1! ✅
[giza][2024-02-12 13:19:56.711] Sending model for transpilation ✅
[WARN][2024-02-12 13:20:07.207] 🔎 Transpilation is partially supported. Some operators are not yet supported in the Transpiler/Orion
[WARN][2024-02-12 13:20:07.209] Please check the compatibility list in Orion: https://cli.gizatech.xyz/frameworks/cairo/transpile#supported-operators
[giza][2024-02-12 13:20:07.773] Downloading model ✅
[giza][2024-02-12 13:20:07.783] model saved at: cairo_model

This version has some operators that are not available in the transpilation, but they might
be supported in Orion. When a model is not fully compatible, a comment will be shown in
inference/lib.cairo:

let node_8 = // Operator LogSoftmax is not yet supported by the Giza transpiler. If Orion supports it, consider manual implementation.;

Let's say that LogSoftMax is the unsupported operator. If we check the Orion
documentation, we can see that it is supported. Now we could add the necessary code to
add our operator (including imports):

let node_8 = NNTrait::logsoftmax(node_7_output_0, 1);

LogSoftMax serves only as an example; it does not mean that the operator is currently unsupported.

After the manual implementation, we can trigger the update with the update command:

❯ giza versions update --model-id 1 --version-id 1 --model-path cairo_model
[giza][2024-02-12 13:35:28.993] Checking version ✅
scarb 2.4.3 (5dbab1f31 2024-01-04)
cairo: 2.4.3 (https://crates.io/crates/cairo-lang-compiler/2.4.3)
sierra: 1.4.0

[giza][2024-02-12 13:35:29.138] Scarb is installed, proceeding with the build.
Compiling inference v0.1.0
(/Users/gizabrain/cairo_model/inference/Scarb.toml)
error: Unexpected argument type. Expected:
"@orion::operators::tensor::core::Tensor::<?13>", found:
"orion::operators::tensor::core::Tensor::
<orion::numbers::fixed_point::implementations::fp16x16::core::FP16x16>".
--> /Users/gizabrain/cairo_model/inference/src/lib.cairo:15:34
let node_8 = NNTrait::logsoftmax(node_7_output, 1);
^********************^

error: could not compile `inference` due to previous error


[ERROR][2024-02-12 13:35:34.847] Compilation failed
[ERROR][2024-02-12 13:35:34.848] ⛔️ Error building the scarb model ⛔️
[ERROR][2024-02-12 13:35:34.848] ⛔️ Version could not be updated ⛔️
[ERROR][2024-02-12 13:35:34.849] Check scarb documentation: https://docs.swmansion.com/scarb/

Here's what is going on:

We want to update the first version of the first model with our new code; the code is at
--model-path cairo_model

The CLI checks if scarb is available in the system


scarb build is attempted
We still have some errors that we have to fix

In this case, we purposely forgot to add the @ to showcase a common scenario:

let node_8 = NNTrait::logsoftmax(@node_7_output, 1);

Once everything is fixed we can attempt the update again:

❯ giza versions update --model-id 1 --version-id 1 --model-path cairo_model


[giza][2024-02-12 13:43:25.913] Checking version ✅
scarb 2.4.3 (5dbab1f31 2024-01-04)
cairo: 2.4.3 (https://crates.io/crates/cairo-lang-compiler/2.4.3)
sierra: 1.4.0

[giza][2024-02-12 13:43:26.064] Scarb is installed, proceeding with the build.

Compiling inference v0.1.0 (/Users/gizabrain/cairo_model/inference/Scarb.toml)
Finished release target(s) in 6 seconds
[giza][2024-02-12 13:43:32.326] Compilation successful
[giza][2024-02-12 13:43:33.708] Sierra updated ✅
[giza][2024-02-12 13:43:34.962] Version updated ✅
{
"version": 1,
"size": 8858,
"status": "COMPLETED",
"message": null,
"description": "Initial version",
"created_date": "2024-02-12T12:19:56.324501",
"last_update": "2024-02-12T12:43:34.906667"
}

The version has been updated successfully! Now we have a fully compatible model that
generated a Sierra and can be easily deployed. The version will now be frozen, so it won't
allow any more updates.

When we refer to a version of a model, we refer to the code/artifact of a specific model at a specific point in time. The model is frozen for tracking purposes.
What is happening with the models and
versions?
In Giza, a model is essentially a container for versions. Each version represents a
transpilation of a machine learning model at a specific point in time. This allows you to
keep track of different versions of your model as it evolves and improves over time.

To check the current models and versions that have been created, you can use the
following steps:

1. Use the giza models list command to list all the models that have been created.
2. For each model, you can use the giza versions list --model-id ... command
to list all the versions of that model.

Remember, each version represents a specific transpilation of the model. So, if you have
made changes to your machine learning model and transpiled it again, it will create a
new version.

This system of models and versions allows you to manage and keep track of the
evolution of your machine learning models over time.

For example, let's say you have created a model called awesome_model and transpiled it
twice. This will create two versions of the model, version 1 and version 2. You can check
the status of these versions using the giza versions list --model-id ...
command.
giza versions list --model-id 29
[giza][2023-09-13 14:17:08.006] Listing versions for the model ✅
[
{
"version": 1,
"size": 52735,
"status": "COMPLETED",
"message": "Transpilation Successful!",
"description": "Initial version",
"created_date": "2023-09-13T12:08:38.177605",
"last_update": "2023-09-13T12:08:43.986137"
},
{
"version": 2,
"size": 52735,
"status": "COMPLETED",
"message": "Transpilation Successful!",
"description": "Initial version",
"created_date": "2023-09-13T12:11:30.165440",
"last_update": "2023-09-13T12:11:31.625834"
}
]
Endpoints
Endpoints in our platform provide a mechanism for creating services that accept
predictions via a designated endpoint. These services, based on existing platform
versions, leverage Cairo under the hood to ensure provable inferences. Using the CLI,
users can effortlessly deploy and retrieve information about these machine learning
services.

Deploying a model as an endpoint


To deploy a model, you must first have a version of that model. This can be done by
creating a version of your model.

To create a new service, users can employ the deploy command. This command
facilitates the deployment of a verifiable machine learning service ready to accept
predictions at the /cairo_run endpoint, providing a straightforward method for
deploying and utilizing machine learning capabilities.

> giza endpoints deploy --model-id 1 --version-id 1 model.sierra.json


▰▰▰▰▰▱▱ Creating endpoint!
[giza][2024-02-07 12:31:02.498] Endpoint is successful ✅
[giza][2024-02-07 12:31:02.501] Endpoint created with id -> 1 ✅
[giza][2024-02-07 12:31:02.502] Endpoint created with endpoint URL: https

If a model is fully compatible, the Sierra file is not needed and the model can be deployed
without passing it in the command:
> giza endpoints deploy --model-id 1 --version-id 1


▰▰▰▰▰▱▱ Creating endpoint!
[giza][2024-02-07 12:31:02.498] Endpoint is successful ✅
[giza][2024-02-07 12:31:02.501] Endpoint created with id -> 1 ✅
[giza][2024-02-07 12:31:02.502] Endpoint created with endpoint URL: https
For a partially compatible model the Sierra file must be provided; if not, an error will be
shown.

Example request
Now our service is ready to accept predictions at the provided endpoint URL. To test this,
we can use the curl command to send a POST request to the endpoint with a sample
input.

> curl -X POST https://deployment-gizabrain-38-1-53427f44-dagsgas-ew.a.run.app/cairo_run \
  -H "Content-Type: application/json" \
  -d '{
    "args": "[\"2\", \"2\", \"2\", \"4\", \"1\", \"2\", \"3\", \"4\"]"
  }' | jq
>>>>>
{
"result": [
{
"value": {
"val": [
1701737587,
1919382893,
1869750369,
1852252262,
1864395887,
1948284015,
1231974517
]
}
}
]
}
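For reference, here is roughly the same request issued from Python with the requests library. The endpoint URL is the example one above and the argument list is illustrative only, so replace both with your own values:

import json

import requests

# Example endpoint URL from the docs above; use your own endpoint URL here.
url = "https://deployment-gizabrain-38-1-53427f44-dagsgas-ew.a.run.app/cairo_run"

# The service expects the Cairo program arguments serialized as a single string.
payload = {"args": json.dumps(["2", "2", "2", "4", "1", "2", "3", "4"])}

response = requests.post(url, json=payload)
response.raise_for_status()
print(response.json())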

Listing endpoints
The list command is designed to retrieve information about all existing endpoints. It
provides an overview of the deployed machine learning services, allowing users to
monitor and manage multiple endpoints efficiently.

giza endpoints list


>>>>
[giza][2024-01-17 17:19:00.631] Listing endpoints ✅
[
{
"id": 1,
"status": "COMPLETED",
"uri": "https://deployment-gizabrain-1-1-53427f44-dagsgas-ew.a.run.app
"size": "S",
"service_name": "deployment-gizabrain-1-1-53427f44",
"model_id": 1,
"version_id": 1,
"is_active": true
},
{
"id": 2,
"status": "COMPLETED",
"uri": "https://deployment-gizabrain-1-2-53427f44-dagsgas-ew.a.run.app
"size": "S",
"service_name": "deployment-gizabrain-1-2-53427f44",
"model_id": 1,
"version_id": 2,
"is_active": false
}
]

Executing this command will display a list of all current endpoints, including relevant
details such as service names, version numbers, and endpoint status.

To list only active endpoints you can use the flag --only-active/-a so only active ones
are shown.

Retrieving an endpoint
For retrieving detailed information about a specific endpoint, users can utilize the get
command. This command allows users to query and view specific details of a single
endpoint, providing insights into the configuration, status, and other pertinent
information.

> giza endpoints get --endpoint-id 1


>>>>
{
"id": 1,
"status": "COMPLETED",
"uri": "https://deployment-gizabrain-38-1-53427f44-dagsgas-ew.a.run.app
"size": "S",
"service_name": "deployment-gizabrain-38-1-53427f44",
"model_id": 38,
"version_id": 1,
"is_active": true
}

Retrieve logs for an endpoint


If an error has occurred or you want to know more about what is happening under the
hood, you can retrieve the logs of the deployed endpoint.

> giza endpoints logs --endpoint-id 1


>>>>
[giza][2024-05-19 19:29:43.601] Getting logs for endpoint 1 ✅
2024-05-19T10:12:12.635831Z INFO orion_runner: ✅ Sierra program downloaded
2024-05-19T10:12:12.635905Z INFO orion_runner: 🚀 Server running on 0.0.0.0

Delete an endpoint
For deleting an endpoint, users can use the delete command. This command facilitates
the removal of a machine learning service, allowing users to manage and maintain their
deployed services efficiently.
> giza endpoints delete --endpoint-id 1
>>>>
[giza][2024-03-06 18:10:22.548] Deleting endpoint 1 ✅
[giza][2024-03-06 18:10:22.830] Endpoint 1 deleted ✅

The endpoints are not fully deleted, so you can still access the underlying proofs generated
by them.

List the proving jobs for an endpoint


To list the proving jobs for an endpoint, we can use the list-jobs command available
for the endpoints. This command will return a list of all the proving jobs for the endpoint
with the request_id for easier tracking.

> giza endpoints list-jobs --endpoint-id 1


[giza][2024-03-06 18:13:50.485] Getting jobs from endpoint 1 ✅
[
{
"id": 1,
"job_name": "proof-20240306-979342e7",
"size": "S",
"status": "Completed",
"elapsed_time": 120.,
"created_date": "2024-03-06T16:12:31.295958",
"last_update": "2024-03-06T16:14:29.952678",
"request_id": "979342e7b94641f0a260c1997d9ccfee"
},
{
"id": 2,
"job_name": "proof-20240306-f6559749",
"size": "S",
"status": "COMPLETED",
"elapsed_time": 120.0,
"created_date": "2024-03-06T16:43:27.531250",
"last_update": "2024-03-06T16:45:17.272684",
"request_id": "f655974900d8479c9bb662a060bc1365"
}
]
List the proofs for an endpoint
To list the proofs for an endpoint, we can use the list-proofs command available for
the endpoints. This command will return a list of all the proofs for the endpoint with the
request_id for easier tracking.

> giza endpoints list-proofs --endpoint-id 1


[giza][2024-03-06 18:15:23.146] Getting proofs from endpoint 1 ✅
[
{
"id": 1,
"job_id": 1,
"metrics": {
"proving_time": 0.03023695945739746
},
"created_date": "2024-03-06T16:44:46.196186",
"request_id": "979342e7b94641f0a260c1997d9ccfee"
},
{
"id": 1,
"job_id": 2,
"metrics": {
"proving_time": 0.07637895945739746
},
"created_date": "2024-03-06T16:44:46.196186",
"request_id": "f655974900d8479c9bb662a060bc1365"
}
]

Verify a proof
After successfully creating a proof for your Orion Cairo model, the next step is to verify
its validity. Giza offers a verification method using the verify command alongside the
endpoint-id and proof-id .
> giza endpoints verify --endpoint-id 1 --proof-id "b14bfbcf250b404192765d
[giza][2024-02-20 15:40:48.560] Verifying proof...
[giza][2024-02-20 15:40:49.288] Verification result: True
[giza][2024-02-20 15:40:49.288] Verification time: 2.363822541

Endpoint Sizes
Size  vCPU  Memory (GB)
S     2     2
M     4     4
L     4     8
XL    8     16
Agents
Agents are entities designed to assist users in interacting with Smart Contracts by
managing the proof verification of verifiable ML models and executing these contracts
using Ape's framework.

Agents serve as intermediaries between users and Smart Contracts, facilitating seamless
interaction with verifiable ML models and executing associated contracts. They handle
the verification of proofs, ensuring the integrity and authenticity of data used in contract
execution.

Creating an agent
Listing agents
Retrieving an agent
Updating an agent
Deleting an agent
More information

Creating an agent
To create an agent, first you need to have an endpoint already deployed and an ape
account created locally. If you have not yet deployed an endpoint, please refer to the
endpoints documentation. To create the ape account, you can use the
ape accounts generate command:

$ ape accounts generate <account name>


Enhance the security of your account by adding additional random input:
Show mnemonic? [Y/n]: n
Create Passphrase to encrypt account:
Repeat for confirmation:
SUCCESS: A new account '0x766867bB2E3E1A6E6245F4930b47E9aF54cEba0C' with H
The passphrase must be kept secret and secure, as it is used to encrypt the account and is
required to access it. The account name is used to identify the account and, along with the
passphrase, to perform transactions in the smart contracts.

To create an agent, users can employ the create command. This command facilitates
the creation of an agent, allowing users to interact with deployed endpoints and execute
associated contracts.

During the creation you will be asked to select an account to create the agent. The
account is used to sign the transactions in the smart contracts.

> giza agents create --model-id <model_id> --version-id <version_id> --name <agent_name> --description <agent_description>

[giza][2024-04-10 11:50:24.005] Creating agent ✅


[giza][2024-04-10 11:50:24.006] Using endpoint id to create agent, retriev
[giza][2024-04-10 11:50:53.480] Select an existing account to create the agent
[giza][2024-04-10 11:50:53.480] Available accounts are:
┏━━━━━━━━━━━━━┓
┃ Accounts ┃
┡━━━━━━━━━━━━━┩
│ my_account │
└─────────────┘
Enter the account name: my_account
{
"id": 1,
"name": <agent_name>,
"description": <agent_description>,
"parameters": {
"model_id": <model_id>,
"version_id": <version_id>,
"endpoint_id": <endpoint_id>,
"alias": "my_account"
},
"created_date": "2024-04-10T09:51:04.226448",
"last_update": "2024-04-10T09:51:04.226448"
}

An Agent can also be created using the --endpoint-id flag, which allows users to
specify the endpoint ID directly.

> giza agents create --endpoint-id <endpoint_id> --name <agent_name> --description <agent_description>


Listing agents
The list command is designed to retrieve information about all existing agents and their
parameters.

> giza agents list

[giza][2024-04-10 12:30:05.038] Listing agents ✅


[
{
"id": 1,
"name": "Agent one",
"description": "Agent to handle liquidity pools",
"parameters": {
"model_id": 1,
"version_id": 1,
"endpoint_id": 1,
"account": "awesome_account",
},
"created_date": "2024-04-09T15:07:14.282177",
"last_update": "2024-04-10T10:06:36.928941"
},
{
"id": 2,
"name": "Agent two",
"description": "Agent to handle volatility",
"parameters": {
"model_id": 1,
"version_id": 2,
"endpoint_id": 2,
"account": "another_awesome_account"
},
"created_date": "2024-04-10T09:51:04.226448",
"last_update": "2024-04-10T10:12:18.975737"
}
]

Retrieving an agent
For retrieving detailed information about a specific agent, users can utilize the get
command. This command allows users to view the details of a specific agent:

> giza agents get --agent-id 1

{
"id": 1,
"name": "Agent one",
"description": "Agent to handle liquidity pools",
"parameters": {
"model_id": 1,
"version_id": 1,
"endpoint_id": 1,
"account": "awesome_account",
},
"created_date": "2024-04-09T15:07:14.282177",
"last_update": "2024-04-10T10:06:36.928941"
}

Updating an agent
To update an agent, users can use the update command. This command facilitates the
modification of an agent, allowing users to update the agent's name, description, and
parameters.
> giza agents update --agent-id 1 --name "Agent one updated" --description

{
"id": 1,
"name": "Agent one updated",
"description": "Agent to handle liquidity pools updated",
"parameters": {
"model_id": 1,
"version_id": 1,
"endpoint_id": 1,
"chain": "ethereum:mainnet:geth",
"account": "awesome_account",
},
"created_date": "2024-04-10T09:51:04.226448",
"last_update": "2024-04-10T10:37:28.285500"
}

The parameters can be updated using the --parameters flag, which allows users to
specify the parameters to be updated.

> giza agents update --agent-id 1 --parameters chain=ethereum:mainnet:geth

The --parameters flag can be used multiple times to update multiple parameters and
expects a key-value pair separated by an equal sign, parameter_key=parameter_value .

Delete an agent
To delete an Agent, users can use the delete command. This command will erase any
data related to the agent.

> giza agents delete --agent-id 1

[giza][2024-04-10 12:40:33.959] Deleting agent 1 ✅


[giza][2024-04-10 12:40:34.078] Agent 1 deleted ✅
More Information
For more information about agents, and their usage in AI Actions, please refer to the
Agents documentation.
Prove
Giza provides two methods for proving Orion Cairo programs: through the CLI or directly
after running inference on the Giza Platform. Below are detailed instructions for both
methods.

Option 1: Prove a Model After Running Inference

Deploying Your Model


After deploying an endpoint of your model on the Giza Platform, you will
receive a URL for your deployed model. Refer to the Endpoints section for
more details on deploying endpoints.

Running Inference
To run inference, use the /cairo_run endpoint of your deployed model's
URL. For example:

https://deployment-gizabrain-38-1-53427f44-dagsgas-ew.a.run.app/

This action will execute the inference, generate Trace and Memory files on
the platform, and initiate a proving job. The inference process will return the
output result along with a request ID.
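
As a rough illustration, the request below shows how such a call might look from Python. This is a
hedged sketch: the endpoint URL placeholder and the payload layout are assumptions that you should
adapt to your deployed model's input schema.

import requests

# Hypothetical sketch: call the /cairo_run route of your deployed endpoint.
# Replace the URL with the one returned at deployment; the payload layout
# below is an assumption, adapt it to your model's expected input.
endpoint_url = "https://<your-endpoint-url>/cairo_run"
payload = {"args": "[1, 2, 3]"}  # assumed input encoding

response = requests.post(endpoint_url, json=payload)
response.raise_for_status()

# The response is expected to contain the inference output and a request ID
# that can later be used to check the proof status.
print(response.json())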

Checking Proof Status


To check the status of your proof, use the following command:

giza endpoints get-proof --model-id <MODEL_ID> --version-id <VERSION_ID> --proof-id <PROOF_ID>

Downloading Your Proof


Once the proof is ready, you can download it using:

giza endpoints download-proof --model-id <MODEL_ID> --version-id <VERSION_ID> --proof-id <PROOF_ID>

Option 2: Proving a Model Directly from the CLI

Alternatively, you can prove a model directly using the CLI without deploying
the model for inference. This method requires providing Trace and Memory
files, which can only be obtained by running CairoVM in proof mode.

Running the Prove Command


Execute the following command to prove your model:

giza prove --trace <TRACE_PATH> --memory <MEMORY_PATH> --output-path <OUTPUT_PATH>

This option is not recommended because of the need to deal with CairoVM. If
you opt for this method, ensure you use the following commit of CairoVM:
1a78237 .

Job Size
When generating a proof we can choose the size of the underlying job:

Size  CPU (vCPU)  Memory (GB)
S     4           16
M     8           32
L     16          64
XL    30          120
Verify
After successfully creating a proof for your Orion Cairo model, the next step is to verify
its validity.

To verify a proof use the following command:

giza verify --model-id <MODEL_ID> --version-id <VERSION_ID> --proof-id <PROOF_ID>

Upon successful submission, you will see a confirmation message indicating that the
verification process has begun. Once the verification process is complete, a success
message will confirm the validity of the proof:

[giza][2024-04-22 13:48:12.236] Verifying proof...


[giza][2024-04-22 13:48:14.985] Verification result: True
[giza][2024-04-22 13:48:14.985] Verification time: 2.363822541
Model Compatibility

Frameworks Eligible for Transpilation


We are actively working to support more frameworks.

Framework                    Supported  Additional note
ONNX                         ✅         Please check the ONNX operators we support.
XGBoost (serialized model)   ✅         You must serialize your model before sending it to the
                                        transpiler. You can also use the MCR to reduce the
                                        complexity of your model.
LightGBM (serialized model)  ✅         You must serialize your model before sending it to the
                                        transpiler. You can also use the MCR to reduce the
                                        complexity of your model.

Supported ONNX Operators


We are actively working to support more operators.

The following ONNX operators are currently implemented:

Abs, Acos, Acosh, Add, And, Asin, Asinh, Atan, ArgMax, ArgMin, Cast, Concat,
Constant, Div, Equal, GatherElements, Gather, Gemm, LinearClassifier,
LinearRegressor, Less, LessOrEqual, MatMul, Mul, ReduceSum, Relu, Reshape,
Sigmoid, Slice, Softmax, Squeeze, Sub, Unsqueeze
Known Limitations
Transpilation is failing due to memory
This can happen for two reasons:

The provided model is so big that the transpilation runs out of memory. When a model is
transpiled, a Cairo limitation forces us to embed the data as Cairo files. These files are
much bigger than a binary format (like ONNX), which means that running the generated
Cairo code can consume all available memory.
When we have a fully compatible model, we compile it on the user's behalf to generate
the Sierra file needed for deployment. This compilation uses a lot of memory and can
lead to an Out of Memory (OOM) error, which also means that we won't be able to run it.

We suggest reviewing your model architecture and simplifying it (number of layers,
neurons, ...). We encourage the usage of ZKBoost alongside our Model Complexity
Reducer (MCR).

Transpilation is failing
When a transpilation fails, the transpilation logs are returned to give more information
about what is happening. If there is an unhandled error, please reach out to a developer
and provide the logs.

Pendulum installation is failing

Make sure that you are using Python 3.11. giza-cli has not been tested with versions
above 3.11, and 3.11 is the minimum requirement. Take a look at Installation.

Proving Job failed

The most common cause is running out of memory. Creating a proof takes a lot of memory
due to its complexity. By default, proving jobs are created using the M size, which uses
8 vCPUs and 32 GB of RAM. If you used the default size, try the L or XL sizes. Here is
a table with the computing resources for each:

Size  CPU (vCPU)  Memory (GB)
S     4           16
M     8           32
L     16          64
XL    30          120

When executing a job it says: "Daily job rate exceeded for size: X"

There is a daily quota on the number of jobs in order to limit usage and prevent overuse.
When the quota limit is reached you can try another size if needed; we encourage using
the smallest size possible, as its quota is higher than for the L or XL sizes. Quotas
reset at 00:00 UTC.

Size  Quota
S     12
M     8
L     6
XL    4

When using an endpoint, remember that the predict function has a dry_run argument so
that proof creation is not triggered on every call, which is useful during development.
Proving time is X but Job took much more time
When a proving job is executed, we need to gather computing resources to launch the job.
For bigger sizes, gathering these resources takes more time, so just spinning up the job
can take up to 5 minutes for the L and XL sizes, whereas the S and M sizes should only
take a couple of seconds to start.

predict raises a 503 Service Unavailable

When using the predict method of a GizaModel, if this error is raised it can be for two
reasons:

Out of Memory: when running the Cairo code the service runs out of memory and is killed,
thus being "Unavailable". Try deleting the endpoint and creating one with a higher size,
check Endpoints. If it persists, the model is too big to be run.
The shape of the input is not the expected shape. This usually happens when we pass a
numpy array with a shape of (x, ); make sure the array has two dimensions, such as
(x, 1), so it can be serialized by the service.

There is now a command, giza endpoints logs --endpoint-id X, to retrieve the logs and
help identify which of the two happened. For the first, a log message of killed should
be shown, and for the second, a deserialization ERROR.
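
As a minimal sketch of fixing the second case, the input can be reshaped with numpy before sending
it to the endpoint (the (3,) shape used here is just an illustrative example):

import numpy as np

data = np.array([0.1, 0.2, 0.3])   # shape (3,), may fail to serialize
data = data.reshape(-1, 1)         # shape (3, 1), two-dimensional as expected
print(data.shape)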

Creating an endpoint returns "Maximum number of endpoints reached"

There is a maximum quota of 4 endpoints per user; delete one and then create a new
one. Check #delete-an-endpoint

I'm getting a 500 Internal Server Error when executing a command

This is an unexpected error. Please open an issue at
https://github.com/gizatechxyz/giza-cli .

Make sure to provide as much information as possible like:

Operating System
Python version
Giza CLI/Agents-sdk version
Giza command used
Python traceback
request_id if returned
Context

Actions SDK is returning an unhandled error


This is an unexpected error. Please open an issue at
https://github.com/gizatechxyz/actions-sdk

Make sure to provide as much information as possible like:

Operating System
Python version
Giza CLI/Agents-sdk version
Giza command used
Python traceback
request_id if returned
Context
AI Agents
Easy to use Verifiable AI and smart contracts interoperability.

Giza Agents is a framework for trust-minimized integration of machine learning into on-
chain strategy and action, featuring mechanisms for agentic memory and reflection that
improve performance over their lifecycle.

The extensible nature of Giza Agents allows developers to enshrine custom strategies
using ML and other algorithms, develop novel agent functionalities and manage
continuous iteration processes.

🌱 Where to start?
⏩ Quickstart: A quick resource to give you an initial idea of how AI Agents work.
💡 Concepts: Everything you need to know about Agents.
📚 How-To Guides: See how to achieve a specific goal using Agents.
🧑‍🎓 Tutorials: A great place to start if you’re a beginner. This section will help you gain the basic skills you need to start using Agents.
Concepts
Thesis
Intelligence is the arbiter of power, culture and the means of life on earth.

Despite its pervasive influence, a precise definition of intelligence remains elusive. Most
scholars, however, emphasize the incorporation of learning and reasoning into action.
This focus on the integration of learning and action has largely defined the field of
Artificial Intelligence, with some early researchers explicitly framing the task of AI as the
creation of intelligent agents capable of “perceiving [their] environment through sensors
and acting upon that environment through effectors.”2

Yet the current environment in which these AI Agents operate is fragmented and almost
exclusively defined by private interest. These dynamics impede the potential for societal
value through collective experimentation and innovation. Such an approach requires an
open, persistent environment with self-enforcing standards and a shared medium for
communicating and transacting. Without such an environment, Agents will remain gated
and constrained private instruments, trapped in designated niches and narrow interest
groups.

Thankfully, Web3 has been building this vision of an immutable and shared digital
environment for nearly a decade. However, the value it has created is largely limited due
to several pressures:

risky, high-stakes user interactions


“skinny interfaces” dictated by smart contract constraints
learning curve reset with each new primitive

To transcend these pressures and the adoption bottleneck, Web3 needs a new interface
paradigm: one which prioritizes user preferences and abstracts the complexities involved
in interacting with intricate technical and financial logic.

We see Agents as an emerging intermediary layer capable of serving diverse functions,
user requirements and risk appetites. Agents can manage the inherent risks and
complexities of smart contract applications in a verifiable and traceable manner,
enabling automated risk management, smart assets, decentralized insurance and many
more context-sensitive use cases. One could even make the case that Agents are the
preferred user type for Web3 given their persistent uptime and capacity for broad data
analysis and highly specialized decision-making. Unlike legacy financial systems,
permissionless blockchains do not distinguish between humans and machines as
transacting entities. They also provide a radically open data ecosystem to monitor and
refine Agent behavior. Bringing Agent capabilities to Web3 would expand and enrich the
native utility of decentralized infrastructure, unlocking use cases that require adaptive
and context-aware behavior beyond what smart contracts alone can accommodate.

“Wallet-enabled agents can use any smart contract service or platform, from
infrastructure services to DeFi protocols to social networks, which opens a whole
universe of new capabilities and business models. An agent could pay for its own
resources as needed, whether it’s computation or information. It could trade tokens on
decentralized exchanges to access different services or leverage DeFi protocols to
optimize its financial operations. It could vote in DAOs, or charge tokens for its
functionality and trade information for money with other specialized agents. The result is
a vast, complex economy of specialized AI agents talking to each other over
decentralized messaging protocols and trading information onchain while covering the
necessary costs. It’s impossible to do this in the traditional financial system.” — Joel
Monegro, AI Belongs Onchain

Giza Agents: Future of Web3 Applications


The use cases of on-chain ML Agents are nothing short of a paradigm shift for Web3.
However, the computation required to direct Agent behavior is too intensive to be
handled directly on-chain. This integration requires a trust-minimized mechanism to
interoperate high-performance computation with decentralized infrastructures. ZK-
coprocessors have enabled this interoperability, providing significant scaling
improvements.

By adopting this design pattern for bridging ML to Web3, Giza is enabling scalable
integration of verified inferencing to on-chain applications. ML models are converted into
ZK circuits, enabling their predictions to be integrated with on-chain applications
conditional on proof verification. This allows for performant computation of ML models
off-chain and trust-minimized execution of on-chain applications.

Think off-chain, act on-chain.


Architecture
A framework for trust-minimized integration of machine learning into on-chain strategy
and action

Giza Agents makes use of four core modules:

Verifier: the gating function of the Agent, validating trust guarantees for Agent inputs
Intent Module: handles arbitrary strategies informed by ML predictions
Wallet: Agent-controlled wallet for on-chain transactions
Memory: Agent-specific dataset parsed through observing Agent actions and their
impact on the state

These modules are accompanied by an extensible app framework that allows developers
to serve arbitrary Agent functionality to other users of Giza Agents.
Agent
Agents are entities designed to assist users in interacting with Smart Contracts by
managing the proof verification of verifiable ML models and executing these contracts
using Ape's framework.

Agents are intermediaries between users and Smart Contracts, facilitating seamless
interaction with verifiable ML models and executing associated contracts. They handle
the verification of proofs, ensuring the integrity and authenticity of data used in contract
execution.

With an Agent, we can easily take the prediction of a verifiable model and, using our
custom logic, execute a smart contract.

An agent is easily created with:

giza agents create --model-id <model-id> --version-id <version-id> --name <agent name> --description <agent description>

# or if you have the endpoint-id
giza agents create --endpoint-id <endpoint-id> --name <agent name> --description <agent description>

For more information on CLI capabilities with agents check the CLI documentation for
agents.
Account
An account is a wallet for the chain that we want to interact with. For now, an Agent can
be used in any number of contracts within the same account.

This account is an Ape account so anything that you could do like importing or creating
an account with Ape's framework is available.

The account is needed to sign the transactions of the contracts without human
intervention. As this account is created as an encrypted keyfile, you must have created
it with a passphrase, which is used to unlock the account.

This passphrase should be available in the execution environment so the agent can
retrieve it and unlock the account. It should be exported as the <ACCOUNT>_PASSPHRASE
variable. Keeping it as an environment variable is encouraged so it is not committed to
the code, where it could easily be forgotten and published in a public repository.
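
As a small sketch, assuming the variable name is the upper-cased account name followed by
_PASSPHRASE (for an account called my_account), the execution environment can be checked like this:

import os

# Assumption: for an Ape account named "my_account" the agent looks for
# MY_ACCOUNT_PASSPHRASE, following the <ACCOUNT>_PASSPHRASE convention above.
passphrase = os.environ.get("MY_ACCOUNT_PASSPHRASE")
if passphrase is None:
    raise RuntimeError("Export MY_ACCOUNT_PASSPHRASE before running the agent")
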
Contracts
A contract refers to a smart contract deployed on the chain of choice. This contract
access and signing will be handled by the Agent.

The Agent auto-signs all the transactions required by the specified contracts.
How-To-Guides
You'll find various guides to using and creating AI Agents.

Create an Account (Wallet)
Specify and Use Contracts
Agent Usage
Create an Account (Wallet)
Before we can create an AI Agent, we need to create an account using Ape's framework.
We can do this by running the following command:

$ ape accounts generate <account name>


Enhance the security of your account by adding additional random input:
Show mnemonic? [Y/n]: n
Create Passphrase to encrypt account:
Repeat for confirmation:
SUCCESS: A new account '0x766867bB2E3E1A6E6245F4930b47E9aF54cEba0C' with H

This will create a new account under $HOME/.ape/accounts using the keyfile structure
from the eth-keyfile library. For more information on account management, you can
refer to Ape's framework documentation.

During account generation you will be prompted to enter a passphrase to encrypt the
account. This passphrase will be used to unlock the account when needed, so make sure
to keep it safe.

We encourage the creation of a new account for each agent, as it will allow you to
manage the agent's permissions and access control more effectively, but importing
accounts is also possible.

Funding the account

Before we can create an AI Agent, we need to fund the account with some ETH. You can
do this by sending some ETH to the account address generated in the previous step.

For development we encourage the usage of testnets and thus funding via faucets.

When is this account used

This account is necessary prior to the creation of the agent using the Giza CLI. With
this account we manage the execution of the smart contracts; if a smart contract
requires you to sign the transaction, this account must have funds to pay for the gas.
Specify and Use Contracts
To specify the contracts to use we only need to declare an alias and the contract address
for it. Using this alias we allow for the execution of this contract using Python's dot
notation alias.func() .

This contract is specified to the agent using a dictionary:

contracts = {
"alias_one": "0x1234567",
"alias_two": "0x1234567",
}

agent = GizaAgent(
...,
contracts=contracts,
)

This way when we want to use a contract within an agent we can easily access it using
Python's dot notation, like the following example

with agent.execute() as contracts:
    contracts.alias_one.<the contract function>(...)
    contracts.alias_two.<the contract function>(...)

It is mandatory to use the contracts inside the with statement of the execute()
command so contracts are available and loaded.

The contracts must be deployed to be of use.

Also, you are responsible for handling what function to use and the parameters that it
needs, as we only provide the abstraction of the execution.
Agent Usage
An Agent is an extension of a GizaModel; it offers the same capabilities with extra
functionalities like account and contracts handling.

The agent needs the following data to work:

id : model identifier.
version_id : version identifier.
chain : chain where the smart contracts are deployed; to check what chains are
available you can run ape networks list.
contracts : this is the definition of the contracts as a dictionary, more here.
account : this is an optional field indicating the account to use; if it is not provided
we will use the account specified at creation, otherwise we will use the specified
one if it exists locally.

from giza.agents import GizaAgent

agent = GizaAgent(
contracts=<contracts-dict>,
id=<model-id>,
version_id=<version-id>,
chain=<chain>,
account=<account>,
)

There is another way to instantiate an agent using the agent-id received on the
creation request with the CLI:

from giza.agents import GizaAgent

agent = GizaAgent.from_id(
id=<agent-id>,
contracts=<contracts-dict>,
chain=<chain>,
account=<account>,
)
Generate a verifiable prediction
This Agent brings an improved predict function which encapsulates the result in an
AgentResult class. This class is responsible for handling and checking the status of
the proof related to the prediction: when the value is accessed, execution is blocked
until the proof is generated and verified. It looks the same as GizaModel.predict :

result = agent.predict(
input_feed=<input-data>,
verifiable=True,
)

The verifiable argument should be True to force the proof creation; it is kept here
for compatibility with GizaModel .

Access the prediction

You can easily access the prediction by using the value property of the result:

result = agent.predict(
input_feed=<input-data>,
verifiable=True,
)

# This will access the value, if not verified it waits until verification
result.value

Modify polling options

When using the value of an AgentResult, there are default options for polling the
status of the proof and a maximum timeout for the proof job. The defaults are 600
seconds for the timeout and 10 seconds for the poll_interval .
result = agent.predict(
input_feed=<input-data>,
verifiable=True,
timeout=<timeout in seconds>,
poll_interval=<poll interval in seconds>,
)

Dry Run a prediction

While developing a solution, it is possible that we don't need to wait for the proof
to be generated as we iterate. For this there is a dry_run option which gets the
prediction from the verifiable model but does not launch nor wait for the proof to be
generated, improving development times and developer experience.

result = agent.predict(
input_feed=<input-data>,
verifiable=True,
dry_run=True,
)

Execute Contracts
The Agent handles the execution of the contracts that have been specified in the
contracts attribute. Under the hood, the agent gets the information about the
contract and allows for its execution in an easy way.

These contracts must be executed inside the execute() context, which handles the
contract's instantiation as well as the signing of the contracts:
from giza.agents import GizaAgent

contracts_dict = {
"lp": ...,
}

agent = GizaAgent.from_id(
id=<agent-id>,
contracts=<contracts-dict>,
chain=<chain>,
account=<account>,
)

result = agent.predict(
input_feed=<input-data>,
verifiable=True,
)

with agent.execute() as contracts:
    contracts.lp.foo(result.value)

Inside execute() the contract is accessed as contracts.lp and the function foo of
the contract is executed. As it uses the value of the result , the contract function
won't be executed until the result is verified; otherwise an error will be raised.
Known Limitations
Agent is returning "No node found" when executing the contract
When using Agents, we rely on Ape, which tries to find any available public RPC node to
execute the contract. This is not ideal, as public RPC nodes have very limited quotas.
For this reason we encourage the use of a private RPC such as Alchemy. This can be used
in an agent the following way:

agent = GizaAgent(
id=MODEL_ID,
version_id=VERSION_ID,
chain=f"arbitrum:mainnet:{PRIVATE_RPC}",
account=ACCOUNT,
contracts={
...
}
)

Where PRIVATE_RPC is the URL provided for the RPC node.
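
For illustration only, an Alchemy-style URL plugged into the chain string could look like this;
the URL format shown is an example of a provider URL, not a Giza requirement:

# Hypothetical example of a private RPC URL used to build the chain string.
PRIVATE_RPC = "https://arb-mainnet.g.alchemy.com/v2/<your-api-key>"
chain = f"arbitrum:mainnet:{PRIVATE_RPC}"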


Datasets
Giza Datasets SDK is a robust library designed to streamline the integration and
exploration of blockchain data for Machine Learning (ML) applications, particularly in the
realms of Zero-Knowledge Machine Learning (ZKML) and blockchain analysis.

Effortlessly access a rich array of blockchain datasets with a single line of code, and
prepare your dataset for use in sophisticated ML models. Built on a foundation that
supports efficient handling of large-scale data, our SDK ensures optimal performance
with minimal memory overhead, enabling seamless processing of extensive blockchain
datasets. Additionally, our SDK is deeply integrated with the Giza Datasets Hub, offering
a straightforward platform for accessing a diverse range of datasets and enriching the
ML and blockchain community's resources.

Join us in advancing the frontier of blockchain-based ML solutions with the Giza Datasets
SDK, where innovation meets practicality.

🌱 Where to start?
⏩ Quickstart: If you are already familiar with similar libraries, here is a fast guide to get you started.
📚 How-To Guides: Dive into the fundamentals and acquaint yourself with the process of loading, accessing, and processing datasets. This is your starting point if you are exploring for the first time.
🏛️ Hub: Your one-stop destination for a wide range of blockchain datasets. Dive into this rich repository to find, share, and contribute datasets, enhancing your ML projects with the best in blockchain data.
How-To-Guides
Welcome to the Datasets SDK how-to-guides! These beginner-friendly guides will take
you through the basics of utilizing the Datasets SDK. You'll learn to load and prepare
blockchain datasets for training with your preferred machine learning framework. These
tutorials cover loading various dataset configurations and splits, exploring your dataset's
contents, preprocessing, and sharing datasets with the community.

Basic knowledge of Python and familiarity with a machine learning framework like
PyTorch or TensorFlow is assumed. If you're already comfortable with these, you might
want to jump into the quickstart for a glimpse of what Giza Datasets SDK offers.

Remember, these guides focus on fundamental skills for using the Giza Datasets SDK .
There's a wealth of other functionalities and applications to explore beyond these guides.

Ready to begin? Let's dive in! 🚀


DatasetsHub
A key aim of the Giza Datasets SDK is to simplify the process of searching the existing
collection of datasets of various purposes, formats and sources. The most straightforward
way to start is to use the DatasetsHub, the search and query feature for the Giza Datasets
library. Using the DatasetsHub, you can search through the datasets within your ML
development environment.

Dataset Object
Before using the DatasetsHub , it's useful to first understand Datasets themselves.
Datasets in giza.datasets are represented by the Dataset class, which includes details
about a dataset such as the dataset's name, description, link to its documentation, tags,
etc. You can query information about a given dataset with the DatasetsHub.

DatasetsHub
The DatasetsHub class provides methods to manage and access datasets within the
Giza Datasets library. Before we delve deeper into the various methods, let's import
DatasetsHub and instantiate a DatasetsHub object.

from giza.datasets import DatasetsHub


hub = DatasetsHub()

Now we can call different DatasetsHub methods.

Use the show() method to print a table of all datasets in the hub:

hub.show()

Use the list() method to get a list of all datasets in the hub:

datasets = hub.list()
print(datasets)
Use the get() method to get a Dataset object with a given name:

dataset = hub.get('tvl-fee-per-protocol')

Use the describe() method to print a table of details for a given dataset:

hub.describe('tvl-fee-per-protocol')

Use the list_tags() method to print a list of all tags in the hub.

hub.list_tags()

Use the get_by_tag() method to get a list of Dataset objects with the given tag.

hub.get_by_tag('Liquidity')
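
As a small sketch, the tag search can be combined with the Dataset details described above; the
attribute name used here (name) is an assumption based on that description:

from giza.datasets import DatasetsHub

hub = DatasetsHub()

# Hypothetical: print the name of every dataset tagged 'Liquidity'.
for dataset in hub.get_by_tag('Liquidity'):
    print(dataset.name)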

Great! Now we can use the DatasetsLoader to load our selected datasets.


DatasetsLoader
The core of our SDK is the integration of Python's Polars library, chosen for its efficiency
in handling large datasets. Polars enables quick data processing and manipulation, which
is vital for data analysis and machine learning. Our DatasetsLoader, built on Polars,
offers an easy-to-use solution for loading various datasets, making the process smoother
and more efficient for data-driven projects.

DatasetsLoader
Locating reliable, easily reproducible datasets can often be a challenge. A key aim of the
Giza Datasets SDK is to simplify the process of accessing datasets of various formats and
types. The most straightforward way to start is to explore the Dataset Library or use the
DatasetsHub.

Assuming that we already know the name of the dataset we want to load, we can now use
the DatasetsLoader to load it.

from giza.datasets import DatasetsLoader

# Instantiate the DatasetsLoader object


loader = DatasetsLoader()

By default, DatasetsLoader has the use_cache option enabled to improve the loading
performance of our datasets. If you want to disable it, add the following parameter when
initializing your class:

loader = DatasetsLoader(use_cache = False)

If you want to learn more about cache management, visit the Cache management
section.

Depending on your device's configuration, it may be necessary to provide SSL
certificates to verify the authenticity of HTTPS connections. You can ensure that these
certificates are correctly configured by executing the following lines of code:
import certifi
import os

os.environ['SSL_CERT_FILE'] = certifi.where()

Once we have our DatasetsLoader instance created and our certificates set up, we are
ready to load one of our datasets.

df = loader.load('yearn-individual-deposits')

df.head()

shape: (5, 7)

evt_block_time       evt_block_number  vaults            token_contract_address  token_symbol
datetime[ns]         i64               str               str                     str
2023-06-07 09:50:35  17427717          "0x3b27f92c0e21…  "0xdac17f958d2e…        "USDT"
2022-08-25 13:53:28  15409462          "0x3b27f92c0e21…  "0xdac17f958d2e…        "USDT"
2022-08-25 07:13:02  15407745          "0x3b27f92c0e21…  "0xdac17f958d2e…        "USDT"
2022-11-19 03:41:35  16001443          "0x3b27f92c0e21…  "0xdac17f958d2e…        "USDT"
2022-12-30 18:34:11  16299403          "0x3b27f92c0e21…  "0xdac17f958d2e…        "USDT"

Keep in mind that giza-datasets uses Polars (and not Pandas) as the underlying
DataFrame library.
In addition, if we have the option use_cache = True (default option), the load method
allows us to load our data in eager mode. With this mode, we will obtain several
advantages both in memory and time:

df = loader.load('yearn-individual-deposits', eager = True)

For more detailed information on the advantages and use of this mode, visit our Eager
mode section.

Success! We can now use the loaded dataset for ML development.


Eager mode
We understand the challenges that come with handling and analysing large volumes of
data. That's why we created the eager mode. A lazy execution mode for optimising your
data analysis workflow.

Why eager mode


Leveraging lazy execution can significantly enhance your data analysis workflow. By
deferring computations until necessary, it enables you to build complex data
transformation pipelines without incurring the performance penalty of executing each
operation immediately. This approach not only minimises memory usage by avoiding the
creation of intermediate data structures but also allows us to optimise the entire
computation graph, selecting the most efficient execution strategy. Whether you're
dealing with large datasets or need to streamline your data processing tasks, Eager
mode ensures that your operations are both fast and resource-efficient, making it a
powerful tool in your data analysis arsenal.

How to use it
from giza.datasets import DatasetsLoader

loader = DatasetsLoader()
df = loader.load("tokens-daily-prices-mcap-volume", eager = True)

df_filtered = df.drop("market_cap").filter(token = "WETH").limit(3)

df_filtered.collect()

After using the collect() method, the result is loaded into memory. Before executing
the collect method, you can add as many operations as you want. Here is the result of
the above code snippet:

date        price    volumes_last_24h  token
2018-02-14  839.535  54776.5           WETH
2018-02-15  947.358  111096.0          WETH
2018-02-16  886.961  57731.7           WETH

Cache management
In the world of data analysis and processing, efficiency and speed are paramount. This is
where caching comes into play. Caching is a powerful technique used to temporarily
store copies of data, making future requests for that data faster and reducing the need to
repeat expensive operations. It's particularly beneficial when dealing with large datasets,
frequent queries, or complex computations that are resource-intensive to compute
repeatedly.

Throughout this tutorial, we'll explore the key functionalities Giza offers for cache
management.

How it works
The default cache directory is ~/.cache/giza_datasets

When you are creating your DatasetsLoader object, you have the option to modify the
path of your cache:

from giza.datasets import DatasetsLoader


loader = DatasetsLoader(cache_dir= "./")

or disable it:

loader = DatasetsLoader(use_cache = False)

and that's it! With this simple configuration Giza takes care of downloading, saving
and loading the necessary data efficiently.

Finally, if you want to clear the cache, you can run the following command:

loader.clear_cache() # 1 datasets have been cleared from the cache directory


Hub
Aggregated datasets
TVL & fees per protocol

Description
This dataset provides daily information on various DeFi protocols, encompassing data
from 35 protocols across 5 categories. The DataFrame includes fields such as chain ,
date , totalLiquidityUSD , fees , category , and project . The primary key for this
dataset is a combination of chain , date , and project . The categories covered in this
dataset are as follows:

Liquid Staking: lido, rocket-pool, binance-staked-eth, mantle-staked-eth, frax-ether
Dexes: uniswap-v3, curve-dex, uniswap-v2, pancakeswap-amm, balancer-v2, pancakeswap-amm-v3, sushiswap and thorchain
Yield: convex-finance, stakestone, aura, pendle, coinwind, penpie
Lending: aave-v3, aave-v2, spark, compound-v3, compound-v2, morpho-aave, morpho-aavev3, benqi-lending, radiant-v2
Yield Aggregator: yearn-finance, beefy, origin-ether, flamincome, sommelier

Collection method
The information was obtained from Defillama API. Subsequently, a manual preprocessing
was performed to filter out the protocols based on their TVL and the availability of
sufficient historical data, not only from TVL, but also from other fields that can be found
in other datasets provided by Giza (TVL for each token by protocol, Top pools APY per
protocol and Tokens OHCL price).

Schema
chain : The blockchain network where the protocol is deployed. In some cases, this
feature not only specifies the blockchain network but also includes certain suffixes
like "staking" or "borrowed" for some protocols. These suffixes provide deeper
insights into the specific nature of the protocol's operations on that blockchain.
date : The date of the data snapshot, recorded daily.
totalLiquidityUSD : Total value locked in USD.
fees : Fees generated by the protocol.
category : The category of the protocol (e.g., Liquid Staking, Dexes).
project : The specific DeFi project or protocol name.

The primary key consists of a combination of chain , date , and project , ensuring
each row provides a unique snapshot of a project's daily performance.

Potential Use Cases


Market Analysis: Assessing the market share and growth of different DeFi categories
and projects.
Trend Identification: Spotting trends in liquidity and fee generation across various
blockchains.
Data Integration: Merging with other detailed datasets for each primary key to gain
deeper insights into individual protocols.
Investment Decision Making: Assisting investors and analysts in making informed
decisions based on liquidity trends and project performance.

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('tvl-fee-per-protocol')

df.head()
chain        date        totalLiquidityUSD  fees  category
"ethereum"   2020-12-20  2.6976e6           0     "Liquid Staking"
"ethereum"   2020-12-21  1.2120e7           0     "Liquid Staking"
"ethereum"   2020-12-21  1.1057e8           0     "Liquid Staking"
"ethereum"   2020-12-21  1.2109e8           0     "Liquid Staking"
"ethereum"   2020-12-21  2.2668e8           0     "Liquid Staking"
Tokens OHLC price

Description
This dataset contains historical price data, sampled every 4 days, for various
cryptocurrencies, sourced from the CoinGecko API. It includes data fields such as
Open, High, Low, Close, and token. The dataset is structured with date and token as
its primary key, enabling chronological analysis of cryptocurrency price movements.
This dataset can be merged with the Top pools APY per protocol dataset for combined
financial analysis.

Collection method
Data is retrieved every 4 days from the CoinGecko API, which provides historical and
current price data for a range of cryptocurrencies. The API is utilized to gather
opening, highest, lowest, and closing prices.

Schema
date : The date for the price data.
Open : Opening price of the cryptocurrency on the given date.
High : Highest price of the cryptocurrency on the given date.
Low : Lowest price of the cryptocurrency on the given date.
Close : Closing price of the cryptocurrency on the given date.
token : Identifier of the cryptocurrency.

The dataset uses date and token as the primary key.

Potential Use Cases


Price Trend Analysis: Analyzing the historical price trends of cryptocurrencies.
Combining with Yield Data: Merging this price data with the "APY top pools per
project" dataset for a comprehensive financial analysis of cryptocurrency market
trends and DeFi yields.
Investment Analysis: Utilizing historical price data to inform cryptocurrency
investment and trading strategies.
Cryptocurrency Market Research: Providing a dataset for researchers to examine
market behaviors and price fluctuations in cryptocurrencies.

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('tokens-ohcl')

df.head()

date Open High Low Close

2019-02-03 3438.3604 3472.2433 3438.3604 3461.0583

2019-02-07 3468.16 3486.4073 3425.8603 3425.8603

2019-02-11 3387.7629 3770.3402 3387.7629 3770.3402

2019-02-15 3605.9233 3652.9015 3605.5237 3605.5237

2019-02-19 3613.8624 3831.385 3613.8624 3831.385
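
Since the description mentions merging this dataset with the Top pools APY per protocol dataset,
here is a hedged sketch of such a join. It assumes both datasets use compatible token identifiers
(token vs. underlying_token), which you should verify before relying on the result:

import polars as pl
from giza.datasets import DatasetsLoader

loader = DatasetsLoader()
prices = loader.load('tokens-ohcl')
pools = loader.load('top-pools-apy-per-protocol')

# Join OHLC prices onto the top-pool APY rows by date and token identifier.
combined = pools.join(
    prices,
    left_on=["date", "underlying_token"],
    right_on=["date", "token"],
    how="inner",
)
print(combined.head())
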


Tokens daily information

Description
This dataset contains daily historical price, mcap and 24h volumes data for various
cryptocurrencies, sourced from the CoinGecko API. It includes data fields such as price
, market_cap , volumes_last_24h and token for each day. The dataset is structured
with date and token as its primary key, enabling chronological analysis of
cryptocurrency price movements. This dataset can be merged with the Top pools APY
per protocol dataset for combined financial analysis.

Collection method
Data is retrieved daily from the CoinGecko API, which provides historical and current
price data for a range of cryptocurrencies.

Schema
date : The date for the price data.
price : refers to the value of a cryptocurrency in USD at a given time, determined by
supply and demand on exchange markets.
market_cap : is the total market value of a cryptocurrency, calculated by multiplying
the current price of the cryptocurrency by its total circulating amount.
volumes_last_24h : represents the total amount of a cryptocurrency bought and
sold across all exchange platforms in the last 24 hours, reflecting the
cryptocurrency's liquidity and trading activity.
token : Identifier of the cryptocurrency.

The dataset uses date and token as the primary key.


Potential Use Cases
Risk Management: Leveraging market cap and volume data to assess the volatility
and risk profile of different cryptocurrencies for portfolio management.
Algorithmic Trading: Developing trading algorithms that use real-time price and
volume data to execute trades based on predefined criteria.
Market Sentiment Analysis: Correlating price movements and trading volumes with
news articles or social media sentiment to gauge the market's mood towards specific
cryptocurrencies.
Liquidity Assessment: Using 24-hour trading volume data to evaluate the liquidity of
cryptocurrencies, aiding in decision-making for large transactions.

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('tokens-daily-prices-mcap-volume')

df.tail()

date        price      market_cap  volumes_last_24h  token

2024-02-01 0.316158 267940000 19854000 "ZRX"

2024-02-02 0.319726 271040000 12309000 "ZRX"

2024-02-03 0.322348 273200000 9907600 "ZRX"

2024-02-04 0.317486 269170000 7345900 "ZRX"

2024-02-05 0.3116971 263910000 6659800 "ZRX"
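
As an illustrative sketch for the risk-management use case above, a simple per-token rolling
volatility of daily returns can be computed with Polars; the 30-day window is an arbitrary choice:

import polars as pl
from giza.datasets import DatasetsLoader

loader = DatasetsLoader()
df = loader.load('tokens-daily-prices-mcap-volume')

# Daily returns per token, then a 30-day rolling standard deviation as a volatility proxy.
df = (
    df.sort(["token", "date"])
      .with_columns(pl.col("price").pct_change().over("token").alias("daily_return"))
      .with_columns(
          pl.col("daily_return").rolling_std(window_size=30).over("token").alias("volatility_30d")
      )
)
print(df.select(["date", "token", "daily_return", "volatility_30d"]).tail())
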


Top pools APY per protocol

Description
This dataset, sourced daily from Defillama, showcases detailed information about the top
20 pools by TVL (Total Value Locked) in the same set of protocols as covered in the "TVL
& fees protocol" dataset. It includes comprehensive data on date , tvlUsd , apy ,
apyBase , project , underlying_token , and chain . The primary key of this dataset
is composed of date , project , and chain , making it compatible for integration with
the TVL & fees protocol dataset for combined analyses.

Collection method
Data is gathered daily from Defillama, focusing on the highest TVL pools across various
DeFi projects. The collection involves identifying the top pools based on their TVL and
then compiling detailed information about their yield (APY), underlying tokens, and other
pertinent details.

Schema
date : The date of the data snapshot.
tvlUsd : Total value locked in USD.
apy : Annual Percentage Yield offered by the pool.
project : The name of the DeFi project or protocol.
underlying_token : The underlying tokens used in each pool.
chain : The blockchain network on which the pool operates.

The dataset uses a combination of date , project , and chain as its primary key.
Potential Use Cases
Yield Analysis: Understanding the APY trends and performance of top pools in
various DeFi protocols.
Market Research: Offering insights into the DeFi market's liquidity distribution and
yield opportunities for market researchers and analysts.
Investment Strategy Development: Assisting investors in identifying pools with
favorable yields and substantial liquidity for informed investment decision-making.

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('top-pools-apy-per-protocol')

df.tail()

date        tvlUsd   apy      project          underlying_token
2024-01-18  1520883  5.4928   "yearn-finance"  "WETH"
2024-01-19  1534167  5.42961  "yearn-finance"  "WETH"
2024-01-20  1524093  5.49112  "yearn-finance"  "WETH"
2024-01-21  1491960  5.09787  "yearn-finance"  "WETH"
2024-01-22  1440121  5.04939  "yearn-finance"  "WETH"
TVL for each token by protocol

Description
This dataset provides a daily historical record of TVL (Total Value Locked) for each token
within various DeFi protocols, aligned with the protocols mentioned in the previous
datasets (TVL & fees per protocol, Top pools APY per protocol and Tokens OHCL price). It
is structured in partitions, with each partition representing a different protocol. The
columns within each partition are named after the tokens supported by that protocol,
and a token is included only if it has historical data dating back to at least August 1,
2022. The dataset includes a date column for the daily data entries and focuses on the
TVL of each token within a protocol, as opposed to the aggregate TVL of the entire
protocol.

Collection method
The data for this dataset is collected from Defillama, a reputable source for DeFi data.
The collection process involves:

Identifying protocols that have significant TVL and historical data presence.
Fetching daily TVL data for each token within these protocols.
Ensuring that tokens with data dating back to at least August 1, 2022, are included.
Organizing the data into partitions by protocol, with columns for each supported
token.
Regularly updating the dataset to reflect the latest TVL figures.

This methodical approach ensures that the dataset is comprehensive and provides a
granular view of the TVL across different tokens in the DeFi space.

Schema
partitions : Organized by protocol.
one_feature_per_supported_tokens : A different column for each of the tokens
supported by the protocol. The column name will be the name of the token. Each row
will represent the TVL of that protocol in that token.
date : Column indicating the date of each TVL entry.

The dataset only has date as the primary key.

Potential Use Cases


Detailed TVL Tracking: Examining the TVL distribution among various tokens within a
protocol.
Trend Analysis: Identifying trends in token popularity and liquidity within protocols.
Comprehensive Market Analysis: Merging this dataset with earlier datasets for a
holistic view of the DeFi ecosystem, spanning from overall protocol performance to
individual token dynamics.

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('tvl-per-project-tokens/project=lido')

df.tail()

date project LUNC MATIC SOL

2024-01-18 1520883 null 1.0968e8 2.7014e7

2024-01-19 1534167 null 1.0574e8 2.6493e7


date project LUNC MATIC SOL

2024-01-20 1524093 null 1.1005e8 2.6495e7

2024-01-21 1491960 null 1.1049e8 2.6054e7

2024 01 22 1440121 null 1 0284e8 2 1498e7


Aave
Daily Deposits & Borrows v2

Description
This dataset provides the aggregated daily borrows and deposits made to the Aave v2
Lending Pools. Only the pools on Ethereum L1 are taken into account, and the
contract_address feature can be used as a unique identifier for the individual pools.
The dataset contains all the pool data from 25.01.2023 to 25.01.2024, and individual
rows are omitted if there were no borrows or deposits made in a given day.

Schema
day - date
symbol - token symbols of the lending pool
contract_address - contract address of the lending pool
deposits_volume - aggregated volume of all deposits made in that day, converted to
USD
borrows_volume - aggregated volume of all borrows made in that day, converted to
USD

The dataset uses day and contract_address as the primary key.

Potential Use Cases


pool specific borrow & lending prediction
borrow rate elasticity analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-daily-deposits-borrowsv2')
Daily Deposits & Borrows v3

Description
This dataset provides the aggregated daily borrows and deposits made to the Aave v3
Lending Pools. Only the pools on Ethereum L1 are taken into account, and the
contract_address feature can be used as a unique identifier for the individual pools.
The dataset contains all the pool data from 25.01.2023 to 25.01.2024, and individual
rows are omitted if there were no borrows or deposits made in a given day.

Schema
day - date
symbol - token symbols of the lending pool
contract_address - contract address of the lending pool
deposits_volume - aggregated volume of all deposits made in that day, converted to
USD
borrows_volume - aggregated volume of all borrows made in that day, converted to
USD

The dataset uses day and contract_address as the primary key.

Potential Use Cases


pool specific borrow & lending prediction
borrow rate elasticity analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-daily-deposits-borrowsv3')
Daily Exchange Rates & Indexes v3

Description
This dataset provides the average borrowing rates (variable & stable), supply rate, and
liquidity indexes of Aave v2 Lending Pools. Only the pools on Ethereum L1 are considered,
and the contract_address feature can be used as a unique identifier for the individual
pools. The dataset contains all the pool data from 25.01.2023 to 25.01.2024, and
individual rows are omitted if there were no borrows executed on the pool.

Schema
day - date
symbol - token symbols of the lending pool
contract_address - contract address of the lending pool
avg_stableBorrowRate - daily average of the stable borrow rate for a given token
avg_variableBorrowRate - daily average of the variable borrow rate for a given token
avg_supplyRate - daily average supply rate for the given pool
avg_liquidityIndex - interest cumulated by the reserve during the time interval since
the last updated timestamp
avg_variableBorrowIndex - variable borrow index of the aave pool

The dataset uses day and contract_address as the primary key.

Potential Use Cases


supply rate prediction / analysis
borrow rate elasticity analysis
Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-daily-rates-indexes')
Liquidations v2

Description
This dataset contains all the individual liquidations of borrow positions in the Aave v2
Protocol. Only the liquidations in Ethereum L1 are shown and the dataset contains all the
liquidation data from inception to 05.02.2024.

For more information on liquidations in Aave Protocol, check out this resource:
https://docs.aave.com/faq/liquidations

Schema
day - date
liquidator - the contract address of the liquidator of the borrow position
user - the contract address of the owner of the borrow position
token_col - symbol of the token used for collateral
token_debt - symbol of the token used for debt
col_contract_address - contract address of the token used for collateral
collateral_amount - collateral amount, in tokens
col_value_USD - collateral amount, converted into USD using the avg USD-Token
price of the day of the liquidation
col_current_value_USD - collateral amount, converted into USD using the USD-Token
price at the time of the dataset creation (05.02.2024)
debt_contract_address - contract address of the token used for debt
debt_amount - debt amount, in tokens
debt_amount_USD - debt amount, converted into USD using the avg USD-Token price
of the day of the liquidation
debt_current_amount_USD - debt amount, converted into USD using the USD-Token
price at the time of the dataset creation (05.02.2024)
Potential Use Cases
liquidation prediction/forecasting
LTV optimization

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-liquidationsV2')
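
As a hedged sketch for the LTV-optimization use case, the USD columns from the schema above can be
combined into a simple debt-to-collateral ratio per liquidation:

import polars as pl
from giza.datasets import DatasetsLoader

loader = DatasetsLoader()
df = loader.load('aave-liquidationsV2')

# Debt-to-collateral ratio at liquidation time, using the day-of-liquidation USD values.
df = df.with_columns(
    (pl.col("debt_amount_USD") / pl.col("col_value_USD")).alias("debt_to_collateral")
)
print(df.select(["day", "token_col", "token_debt", "debt_to_collateral"]).head())
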
Liquidations v3

Description
This dataset contains all the individual liquidations of borrow positions in the Aave v3
Protocol. Only the liquidations in Ethereum L1 are shown and the dataset contains all the
liquidation data from inception to 05.02.2024.

For more information on liquidations in Aave Protocol, check out this resource:
https://docs.aave.com/faq/liquidations

Schema
day - date
liquidator - the contract address of the liquidator of the borrow position
user - the contract address of the owner of the borrow position
token_col - symbol of the token used for collateral
token_debt - symbol of the token used for debt
col_contract_address - contract address of the token used for collateral
collateral_amount - collateral amount, in tokens
col_value_USD - collateral amount, converted into USD using the avg USD-Token
price of the day of the liquidation
col_current_value_USD - collateral amount, converted into USD using the USD-Token
price at the time of the dataset creation (05.02.2024)
debt_contract_address - contract address of the token used for debt
debt_amount - debt amount, in tokens
debt_amount_USD - debt amount, converted into USD using the avg USD-Token price
of the day of the liquidation
debt_current_amount_USD - debt amount, converted into USD using the USD-Token
price at the time of the dataset creation (05.02.2024)
Potential Use Cases
liquidation prediction/forecasting
LTV optimization

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-liquidationsV3')
Balancer
Daily Pool Liquidity

Description
This dataset provides the average daily available liquidity per token for the Balancer v1 &
v2 Liquidity Pools. Data from all the networks are taken into account and the pool_id
feature can be used as a unique identifier for the individual pools. The dataset contains
all the pool data from inception until 26.01.2024, and individual rows are omitted if there
were no trades executed in a given day.

Schema
day - date
pool_id - pool id of the balancer pool
blockchain - blockchain network of the given pool
pool_symbol - symbols of the token pairs (for weighted pools, it's possible to have
more than 2 tokens exchanged in a pool)
token_symbol - symbol of the token with the given liquidity (for every day, each pool
has one row per token)
pool_liquidity - daily average amount of available tokens in the given pool
pool_liquidity_usd - daily average amount of available tokens in the given pool,
converted to USD values using average daily token-USD price of the given day

The dataset uses day , pool_id and token_symbol as the primary keys.

Potential Use Cases


pool specific borrow & lending prediction
borrow rate elasticity analysis
Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('balancer-daily-pool-liquidity')
Daily Swap Fees

Description
This dataset provides the daily average swap fees for the Balancer v1 & v2 Liquidity
Pools. Data from all the networks are taken into account and the contract_address
feature can be used as an unique identifier for the individual pools. The dataset contains
all the pool data from inception until 26.01.2024, and individual rows are omitted if there
were no trades executed in a given day.

Schema
day - date
contract_address - contract address of the balancer pool
blockchain - blockchain network of the given pool
token_pair - symbols of the token pairs exchanged (for weighted pools, it's possible to
have more than 2 tokens exchanged in a pool)
avg_swap_fee - average swap fee for the given pool on the given day

The dataset uses day and contract_address as the primary key.

Potential Use Cases


swap fee elasticity analysis on trade volumes
lender APR optimization

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('balancer-daily-swap-fees')
Daily Trade Volume

Description
This dataset provides the aggregated, daily trading volumes for the Balancer v1 & v2
Liquidity Pools. ata from all the networks are taken into account and the pool_id feature
can be used as an unique identifier for the individual pools. The dataset contains all the
pool data from inception until 26.01.2024, and individual rows are omitted if there were
no trades executed in a given day.

Schema
day - date
pool_id - pool id of the balancer pool
blockchain - blockchain network of the given pool
token_pair - symbols of the token pairs exchanged (for weighted pools, it's possible to
have more than 2 tokens exchanged in a pool)
trading_volume_usd - aggregated volume of token swaps executed on the given day,
converted into USD values in the time of the block execution

The dataset uses day and pool_id as the primary keys.

Potential Use Cases


pool trade volume forecasting
swap fee elasticity analysis on trade volumes
lender APR optimization

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('balancer-daily-trade-volume')
Curve
Daily Trade Volume & Fees

Description
This dataset provides the aggregate daily trade volume for all Curve DEXes, as well as
daily fees & admin fees generated from these trades. Data is gathered for all DEXes
from their inception date; however, individual rows are omitted if the trade volume is
below 1 USD. For the pool_name column, the designated names do not always follow the
convention of "token A - token B", so it is advised to visit Curve's documentation to
learn more.

Schema
day - date
project_contract_address - contract address of the DEX
pool_name - name of the DEX (usually combination of the Tokens, but there are
exceptions with special names)
daily_volume_usd - daily trade volume in USD
admin_fee_usd - daily fees accrued by DEX, designated for the admins and
governance token holders
fee_usd - daily fees accrued by DEX, designated for LP providers
token_type - type of DEX (stable or volatile)

The dataset uses day and project_contract_address as the primary keys.

Potential Use Cases


pool specific trade volume prediction
automated LP strategies
Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('curve-daily-volume-fees')
PancakeSwap
Daily Trade Volume

Description
This dataset provides the daily trade volume per pool for all PancakeSwap protocols.
Data from all the networks and versions are taken into account, from 26.09.2023 until
26.03.2024. Individual rows are omitted if there were no trades executed in a given day
above the threshold of 1 USD.

Schema
day - date
token_pair - symbols of the token pairs
project_contract_address - contract address of the DEX
version - PancakeSwap protocol version
blockchain - network of the DEX contract
daily_volume_usd - daily trade volume in USD

The dataset uses day and project_contract_address as the primary keys.

Potential Use Cases


pool specific trade volume prediction
automated LP strategies

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('pancakeswap-daily-volume')
Uniswap
Uniswap V3 Liquidity Distribution

Description
This dataset contains the liquidity distribution of all Uniswap V3 pools on Ethereum
mainnet with TVL above $100k (as of the time of collection). Each file contains the
snapshots of liquidity distribution in 1000 block intervals as well as the current ticks at
the sampled blocks. Only 100 ticks above and below the current tick are stored. These
datasets can be used to reproduce the liquidity charts from the Uniswap analytics page.

Example of a liquidity distribution chart from Uniswap analytics page

Additionally, there is a standalone utility dataset with all of the pools' details (such as the
tokens within the pool, their decimals, or the fee charged by the pool).

Collection method
The initial list of top Uniswap V3 pools was fetched from the defillama API. Next, each
pool's details were fetched with onchain calls to the appropriate smart contracts. The
reconstruction of available liquidity over time was possible by listening to all the
historical mint and burn events emitted by the pools. Finally, the raw event data was
parsed into an easy-to-understand format.

Schema

Pools data
address: the address of the pool's smart contract
tick_spacing: tick spacing
token0: symbol of the first token in the pool
token1: symbol of the second token in the pool
decimals0: decimals of the first token in the pool
decimals1: decimals of the second token in the pool
fee: swap fee charged by the pool
token0_address: address of the first token
token1_address: address of the second token
chain: which chain the pool is deployed on (ethereum)
pool_type: what's the pool type (Uniswap V3)

Pool liquidity
block_number: block number of the snapshot
in_token: token to swap from
out_token: token to swap out to
price: price at a given tick (quoted in in_token/out_token)
amount: the amount of out_token available to be taken out of the tick
tick_id: tick id

Current ticks
current_tick: current tick of the pool
block_number: block number of the snapshot

Potential Use Cases


Liquidity Provision management
Price prediction

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()

liquidity_df = loader.load('uniswapv3-ethereum-usdc-weth-500-liquidity-sna
ticks_df = loader.load('uniswapv3-ethereum-usdc-weth-500-current-ticks')
pools_info_df = loader.load('uniswapv3-ethereum-pools-info')

liquidity_df.head()
ticks_df.head()
pools_info_df.head()
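Continuing from the loading example above, the following sketch reconstructs a liquidity chart for the most recent snapshot of the loaded pool; it assumes matplotlib is installed and that the frames behave like (or are converted to) pandas DataFrames.

import matplotlib.pyplot as plt

# Convert to pandas if the loader returns Polars DataFrames.
if hasattr(liquidity_df, "to_pandas"):
    liquidity_df = liquidity_df.to_pandas()

# Keep the most recent snapshot and a single swap direction.
latest_block = liquidity_df["block_number"].max()
snapshot = liquidity_df[liquidity_df["block_number"] == latest_block]
snapshot = snapshot[snapshot["in_token"] == snapshot["in_token"].iloc[0]]

# Bar chart of the out_token amount available per tick price,
# similar to the chart on the Uniswap analytics page.
plt.bar(snapshot["price"], snapshot["amount"])
plt.xlabel("price (in_token/out_token)")
plt.ylabel("available out_token amount")
plt.title(f"Liquidity distribution at block {latest_block}")
plt.show()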
Compound
Compound V2 Interest Rates

Description
This dataset contains interest rates of all the markets on Compound V2 (ethereum
mainnet) since the protocol's inception. The interest rates are for both supplying and
borrowing. Additionally, users of this dataset can analyze the total supplied and borrowed
amounts in each market.

Collection method
The data was collected from the Compound V2 subgraph created by The Graph Protocol
(https://thegraph.com/hosted-service/subgraph/graphprotocol/compound-v2). The
queries were sent with a block parameter, corresponding to the current block at midnight
of each day within the dataset's timespan.

Schema
symbol - symbol of the receipt token (e.g. cETH)
totalBorrows - total amount of underlying tokens borrowed from the market
borrowRate - interest rate paid by the borrowers
totalSupply - total amount of receipt tokens issued by the market
supplyRate - interest rate paid by the suppliers
underlyingPriceUSD - USD value of the underlying token of a given market
exchangeRate - the exchange rate of receipt tokens (i.e. cETH/ETH)
timestamp - unix timestamp of the snapshot
block_number - ethereum mainnet block number of the snapshot
totalSupplyUnderlying - total amount of underlying tokens supplied to the market
totalSupplyUSD - total USD value of the supplied tokens
totalBorrowUSD - total USD value of the tokens borrowed from the market

Potential Use Cases


Interest rate prediction

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('compound-daily-interest-rates')
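For example, the schema makes it straightforward to derive a utilization ratio per market (the share of supplied value currently borrowed); the sketch below assumes a pandas-like DataFrame.

from giza.datasets import DatasetsLoader

loader = DatasetsLoader()
df = loader.load('compound-daily-interest-rates')
# Convert to pandas if the loader returns a Polars DataFrame.
if hasattr(df, "to_pandas"):
    df = df.to_pandas()

# Utilization: share of the supplied USD value that is currently borrowed.
df["utilization"] = df["totalBorrowUSD"] / df["totalSupplyUSD"]

# Latest snapshot per market.
latest = df.sort_values("block_number").groupby("symbol").tail(1)
print(latest[["symbol", "supplyRate", "borrowRate", "utilization"]])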
Yearn
Individual Vault Deposits

Description
This dataset provides the individual token deposits made to the Yearn v2 Vaults. Only the
pools on Ethereum L1 are taken into account, and the vaults feature can be used as a
unique identifier for the individual vaults. The dataset contains all the deposit data from
each vault's inception. The value feature contains the token amount, already divided by
10 to the power of the token's decimals to avoid overflows.

Schema
evt_block_time - datetime of the deposit execution
evt_block_number - block number of the deposit execution
vaults - contract address of the Yearn Vault
token_contract_address - contract address of the underlying token
token_symbol - symbol of the underlying token
token_decimals - decimals of the underlying token
value - deposit value in tokens (value is already decimalized using token_decimals)

Potential Use Cases


vault APR elasticity analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('yearn-individual-deposits')
Individual Vault Withdraws

Description
This dataset provides the individual token withdrawals made from the Yearn v2 Vaults. Only
the pools on Ethereum L1 are taken into account, and the vaults feature can be used as
a unique identifier for the individual vaults. The dataset contains all the withdrawal data
from each vault's inception. The value feature contains the token amount, already divided
by 10 to the power of the token's decimals to avoid overflows.

Schema
evt_block_time - datetime of the withdrawal execution
evt_block_number - block number of the withdrawal execution
vaults - contract address of the Yearn Vault
token_contract_address - contract address of the underlying token
token_symbol - symbol of the underlying token
token_decimals - decimals of the underlying token
value - withdrawal value in tokens (value is already decimalized using token_decimals)

Potential Use Cases


vault APR elasticity analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('yearn-individual-withdraws')
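Because deposits and withdrawals share the same vaults identifier, the two datasets can be combined to estimate net token flows per vault; a minimal sketch (assuming pandas-like DataFrames) is shown below.

from giza.datasets import DatasetsLoader

loader = DatasetsLoader()
deposits = loader.load('yearn-individual-deposits')
withdraws = loader.load('yearn-individual-withdraws')
# Convert to pandas if the loader returns Polars DataFrames.
if hasattr(deposits, "to_pandas"):
    deposits, withdraws = deposits.to_pandas(), withdraws.to_pandas()

# Net token flow per vault: total deposited minus total withdrawn (in token units).
inflow = deposits.groupby("vaults")["value"].sum()
outflow = withdraws.groupby("vaults")["value"].sum()
net_flow = inflow.subtract(outflow, fill_value=0).sort_values()
print(net_flow.head())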
Strategy Borrows

Description
This dataset provides the borrows made from the Yearn v2 Vaults by the associated
strategies. Only the pools on Ethereum L1 are taken into account, and the vaults feature
can be used as a unique identifier for the individual vaults, while the
strategy_address feature can be used as a unique identifier for the strategies. The
dataset contains all the borrow data from each vault's inception. The value feature contains
the token amount, already divided by 10 to the power of the token's decimals to avoid
overflows.

Schema
evt_block_time - datetime of the borrow execution
evt_block_number - block number of the borrow execution
vaults - contract address of the Yearn Vault
strategy_address - contract address of the strategy
token_contract_address - contract address of the underlying token
token_symbol - symbol of the underlying token
token_decimals - decimals of the underlying token
value - borrowed value in tokens (value is already decimalized using token_decimals)

Potential Use Cases


vault APR optimization

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('yearn-strategy-borrows')
Strategy Returns

Description
This dataset provides the returns made to the Yearn v2 Vaults by the associated
strategies. Only the pools on Ethereum L1 are taken into account, and the vaults feature
can be used as a unique identifier for the individual vaults, while the
strategy_address feature can be used as a unique identifier for the strategies. The
dataset contains all the return data from each vault's inception. The value feature contains
the token amount, already divided by 10 to the power of the token's decimals to avoid
overflows.

Schema
evt_block_time - datetime of the return execution
evt_block_number - block number of the return execution
vaults - contract address of the Yearn Vault
strategy_address - contract address of the strategy
token_contract_address - contract address of the underlying token
token_symbol - symbol of the underlying token
token_decimals - decimals of the underlying token
value - returned value in tokens (value is already decimalized using token_decimals)

Potential Use Cases


vault APR optimization

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('yearn-strategy-returns')
Farcaster
Individual Reactions

Description
This dataset provides the individual reactions in the Farcaster Social Protocol that are
sent between 01.01.2024 and 04.01.2024. fid is the unique identifier in the Farcaster
protocol and can be used to connect users with their reactions. reaction_type is a
categorical variable, with 1 meaning the user liked the target cast and 2 meaning
that the user recast (reshared) it to their audience.

Schema
id - id of the reaction
fid - fid of the reactor
created_at - timestamp of the reaction
target_fid - fid of the target cast
reaction_type - type of reaction, see the description

Potential Use Cases


Social Graph Analysis
Sentiment Analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('farcaster-reactions')
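Since reaction_type is just an integer code, a convenient first step is to map it to readable labels; the sketch below assumes a pandas-like DataFrame.

from giza.datasets import DatasetsLoader

loader = DatasetsLoader()
df = loader.load('farcaster-reactions')
# Convert to pandas if the loader returns a Polars DataFrame.
if hasattr(df, "to_pandas"):
    df = df.to_pandas()

# Map the categorical reaction_type to readable labels (1 = like, 2 = recast).
df["reaction_label"] = df["reaction_type"].map({1: "like", 2: "recast"})
print(df["reaction_label"].value_counts())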
Individual Casts

Description
This dataset provides the individual casts in the Farcaster Social Protocol that are sent
between 01.01.2024 and 01.03.2024. fid is the unique identifier in the Farcaster
protocol and can be used to connect users with their casts.

Schema
created_at - timestamp of the cast creation
id - id of the cast
fid - fid of the cast creator
hash - hash of the cast
text - text content of the cast
embeds - embed content of the cast
mentions - mentioned content of the cast
parent_fid - fid of the parent cast
parent_url - url of the parent cast
parent_hash - hash of the parent cast
root_parent_url - url of the root cast
root_parent_hash - hash of the root cast
mentions_positions - positions of the mentions in the cast text

Potential Use Cases


Profanity Checker
Sentiment Analysis
Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('farcaster-casts')
Individual Links

Description
This dataset provides the individual links in the Farcaster Social Protocol that are created
between 01.01.2024 and 01.02.2024. fid is the unique identifier in the Farcaster
protocol and can be used to connect users with their link creators and link targets.

Schema
created_at - timestamp of the link creation
id - id of the link creation
fid - fid of the link creator
hash - hash of the link execution
type - link type
target_fid - fid of the link target

Potential Use Cases


Social Graph Analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('farcaster-links')
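For the social graph use case, each row can be treated as a directed edge from the link creator to the link target; the sketch below uses networkx (an extra dependency, not part of the Giza SDK) and assumes a pandas-like DataFrame.

import networkx as nx
from giza.datasets import DatasetsLoader

loader = DatasetsLoader()
df = loader.load('farcaster-links')
# Convert to pandas if the loader returns a Polars DataFrame.
if hasattr(df, "to_pandas"):
    df = df.to_pandas()

# Build a directed graph: link creator -> link target.
graph = nx.from_pandas_edgelist(df, source="fid", target="target_fid", create_using=nx.DiGraph())

# Accounts with the most incoming links.
top_targets = sorted(graph.in_degree(), key=lambda kv: kv[1], reverse=True)[:10]
print(top_targets)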
Lens
Individual Posts

Description
This dataset provides the individual posts in the Lens Protocol that are sent between
01.01.2024 and 04.04.2024. It is important to note that because the content of a post
can be any multimedia, we have decided not to store the content directly; instead, a
column contains the URI of the content.

Schema
call_block_time - timestamp of the post creation
profileID - id of the profile which created the post
contentURI - URI that contains the content of the post

Potential Use Cases


Profanity Checker
Sentiment Analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('lens-posts')
Individual Comments

Description
This dataset provides the individual comments in the Lens Protocol that are sent
between 01.01.2024 and 04.04.2024. It is important to note that because the content of
a comment can be any multimedia, we have decided not to store the content directly;
instead, a column contains the URI of the content.

Schema
call_block_time - timestamp of the comment creation
profileID - id of the profile which created the comment
contentURI - URI that contains the content of the comment
pointed_profileID - id of the profile which created the pointed post
pointed_pubID - id of the pointed post

Potential Use Cases


Profanity Checker
Sentiment Analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('lens-comments')
Individual Mirrors

Description
This dataset provides the individual mirrors in the Lens Protocol that are sent between
01.01.2024 and 04.04.2024.

Schema
call_block_time - timestamp of the mirror creation
profileID - id of the profile which created the mirror
pointed_profileID - id of the profile which created the pointed post
pointed_pubID - id of the pointed post

Potential Use Cases


Profanity Checker
Sentiment Analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('lens-mirrors')
Individual Quotes

Description
This dataset provides the individual quotes in the Lens Protocol that are sent between
01.01.2024 and 04.04.2024. It is important to note that because the content of a quote
can be any multimedia, we have decided not to store the content directly; instead, a
column contains the URI of the content.

Schema
call_block_time - timestamp of the quote creation
profileID - id of the profile which created the quote
contentURI - URI that contains the content of the quote
pointed_profileID - id of the profile which created the pointed post
pointed_pubID - id of the pointed post

Potential Use Cases


Profanity Checker
Sentiment Analysis

Use example
from giza.datasets import DatasetsLoader

# Usage example:
loader = DatasetsLoader()
df = loader.load('lens-quotes')
Tools
Benchmark CLI
Benchmark CLI is a tool to run benchmarks on your Cairo projects through a single
command.

When you run a benchmark on a Cairo program, it runs the entire Cairo stack locally:

Runner (CairoVM)
Prover (Platinum)
Verifier (Platinum)

Prerequisites
Rust
Platinum prover. As of February 2024, the tested revision is fed12d6.
You can install the prover with the following command:

cargo install --features=cli,instruments,parallel --git https://github.com

Installation
cargo install --git https://github.com/gizatechxyz/giza-benchmark.git

Usage
giza-benchmark -p <SIERRA_FILE> -i <PROGRAM_INPUT_FILE> -b <OUTPUT_DIRECTO

Example:

giza-benchmark -p examples/xgb/xgb_inf.sierra.json -i examples/xgb/input.t


ZK-Cook
This package is designed to provide functionality that facilitates the transition from ML
algorithms to ZKML. Its two main functionalities are:

Serialization: saving a trained ML model in a specific format to be interpretable by
other programs.
model-complexity-reducer (mcr): given a model and a training dataset, transform
the model and the data to obtain a lighter representation that maximizes the tradeoff
between performance and complexity.

It's important to note that although the main goal is the transition from ML to ZKML, mcr
can be useful in other contexts, such as:

The model's size needs to be minimal, for example for mobile applications.
Minimal inference times are required for low latency applications.
We want to check if we have created an overly complex model and a simpler one
would give us the same performance (or even better).
The number of steps required to perform the inference must be less than X (as is
currently constrained by the ZKML paradigm).

Installation
Install from PyPi

pip install giza-zkcook

Installing from source

Clone the repository and install it with pip:


git clone git@github.com:gizatechxyz/zkcook.git
cd giza-zkcook
pip install .

Serialization
To see in more detail how this tool works, check out this tutorial.

To run it:

from giza.zkcook import serialize_model

serialize_model(YOUR_TRAINED_MODEL, "OUTPUT_PATH/MODEL_NAME.json")

This serialized model can then be sent directly for transpilation using our platform. If the
model is too large, it will have to be reduced using MCR, but its structure is already
understandable by our transpiler.
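For instance, a serialized model can be sent to the transpiler with the CLI, in the same way as in the tutorials below (the file name here is a placeholder):

giza transpile MODEL_NAME.json --output-path my_model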

MCR
To see in more detail how this tool works, check out this tutorial.

To see in more technical detail the algorithm behind this method, check out our paper.

To run it:
from giza.zkcook import mcr

model, transformer = mcr(model = MY_MODEL,
                         X_train = X_train,
                         y_train = y_train,
                         X_eval = X_test,
                         y_eval = y_test,
                         eval_metric = 'rmse',
                         transform_features = True)
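Once reduced, the returned model can be serialized exactly as shown in the Serialization section above (the output path below is a placeholder):

from giza.zkcook import serialize_model

serialize_model(model, "OUTPUT_PATH/REDUCED_MODEL.json")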

Supported models
Model status

XGBRegressor ✅
XGBClassifier ✅
LGBMRegressor ✅
LGBMClassifier ✅
Logistic Regression ⏳
GARCH ⏳
Tutorials
ZKML
In this collection of tutorials, you'll learn how to transpile models and make verifiable
inferences.

Verifiable XGBoost (beginner, tradi-ml)
Verifiable Linear Regression (beginner, tradi-ml)
Verifiable MNIST Neural Network (beginner, nn)
Verifiable XGBoost
In this tutorial, you will learn how to use the Giza stack through an XGBoost model.

Installation
To follow this tutorial, you must first proceed with the following installation.

Handling Python versions with Pyenv

You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.

Install Python 3.11 using pyenv

pyenv install 3.11.0

Set Python 3.11 as local Python version:


pyenv local 3.11.0

Create a virtual environment using Python 3.11:


pyenv virtualenv 3.11.0 my-env

Activate the virtual environment:


pyenv activate my-env

Now, your terminal session will use Python 3.11 for this project.
Install Giza

Install Giza SDK


Install CLI, agents and zkcook using giza-sdk from PyPi

pip install giza-sdk

You'll find more options for installing Giza in the installation guide.

Install Dependencies

You must also install the following dependencies:

pip install xgboost numpy

Setup
From your terminal, create a Giza user through our CLI in order to access the Giza
Platform:

giza users create

After creating your user, log into Giza:

giza users login

Optional: you can create an API Key for your user in order to not regenerate your access
token every few hours.

giza users create-api-key


Create and Train an XGBoost Model
We'll start by creating a simple XGBoost model and training it on the scikit-learn
diabetes dataset.

import xgboost as xgb
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

# Load the diabetes dataset and split it into train and test sets
data = load_diabetes()
X, y = data.data, data.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A small model keeps the verifiable inference lightweight
n_estimators = 2  # number of trees
max_depth = 6  # maximum depth of each tree

xgb_reg = xgb.XGBRegressor(n_estimators=n_estimators, max_depth=max_depth)

xgb_reg.fit(X_train, y_train)

Save the model


Save the model in JSON format

from giza.zkcook import serialize_model


serialize_model(xgb_reg, "xgb_diabetes.json")

Transpile your model to Orion Cairo


For more detailed information on transpilation, please consult the Transpiler resource.

We will use Giza-CLI to transpile our saved model to Orion Cairo.


! giza transpile xgb_diabetes.json --output-path xgb_diabetes

>>>>
[giza][2024-05-10 17:14:48.565] No model id provided, checking if model ex
[giza][2024-05-10 17:14:48.567] Model name is: xgb_diabetes
[giza][2024-05-10 17:14:49.081] Model already exists, using existing model
[giza][2024-05-10 17:14:49.083] Model found with id -> 588! ✅
[giza][2024-05-10 17:14:49.777] Version Created with id -> 2! ✅
[giza][2024-05-10 17:14:49.780] Sending model for transpilation ✅
[giza][2024-05-10 17:15:00.670] Transpilation is fully compatible. Version
⠙ Transpiling Model...
[giza][2024-05-10 17:15:01.337] Downloading model ✅
[giza][2024-05-10 17:15:01.339] model saved at: xgb_diabetes

Deploy an inference endpoint


For more detailed information on inference endpoint, please consult the Endpoint resource.

Now that our model is transpiled to Cairo we can deploy an endpoint to run verifiable
inferences. We will use Giza CLI again to run and deploy an endpoint. Ensure to replace
model-id and version-id with your ids provided during transpilation.

! giza endpoints deploy --model-id 588 --version-id 2

>>>>
▰▰▰▰▰▰▰ Creating endpoint!
[giza][2024-05-10 17:15:21.628] Endpoint is successful ✅
[giza][2024-05-10 17:15:21.635] Endpoint created with id -> 190 ✅
[giza][2024-05-10 17:15:21.636] Endpoint created with endpoint URL: https

Run a verifiable inference


To streamline verifiable inference, you might consider using the endpoint URL obtained
after transpilation. However, this approach requires manual serialization of the input for
the Cairo program and handling the deserialization process. To make this process more
user-friendly and keep you within a Python environment, we've introduced a Python SDK
designed to facilitate the creation of ML workflows and execution of verifiable
predictions. When you initiate a prediction, our system automatically retrieves the
endpoint URL you deployed earlier, converts your input into Cairo-compatible format,
executes the prediction, and then converts the output back into a numpy object.

import xgboost as xgb
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

from giza.agents.model import GizaModel

MODEL_ID = 588  # Update with your model ID
VERSION_ID = 2  # Update with your version ID

def prediction(input, model_id, version_id):
    model = GizaModel(id=model_id, version=version_id)

    (result, proof_id) = model.predict(
        input_feed={"input": input}, verifiable=True, model_category="XGB"
    )

    return result, proof_id

def execution():
    # The input data type should match the model's expected input
    input = X_test[1, :]

    (result, proof_id) = prediction(input, MODEL_ID, VERSION_ID)

    print(f"Predicted value for input {input.flatten()[0]} is {result}")

    return result, proof_id

if __name__ == "__main__":
    data = load_diabetes()
    X, y = data.data, data.target

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    _, proof_id = execution()
    print(f"Proof ID: {proof_id}")
🚀 Starting deserialization process...
✅ Deserialization completed! 🎉
(175.58781, '546f8817fa454db78982463868440e8c')

If your problem is a binary classification problem, you will need to post-process the
result obtained after executing the predict method. The code you need to execute to get
the probability of class 1 (same probability returned by XGBClassifier.predict_proba()) is in
the following code snippet.

import json
import math

def logit(x):
    return math.log(x / (1 - x))

def post_process_binary_pred(model_json_path, result):
    """
    Returns the probability of the positive class given a result from Giza.

    Parameters:
        model_json_path (str): Path to the trained model in JSON format.
        result (float): Result from GizaModel.predict().

    Returns:
        float: Probability of the positive class.
    """
    with open(model_json_path, 'r') as f:
        xg_json = json.load(f)

    base_score = float(xg_json['learner']['learner_model_param']['base_score'])

    if base_score != 0:
        result = result + logit(base_score)
    final_score = 1 / (1 + math.exp(-result))

    return final_score

# Usage example
model_path = 'PATH_TO_YOUR_MODEL.json'  # Path to your model JSON file
predict_result = 3.45  # Example result from GizaModel.predict()
probability = post_process_binary_pred(model_path, predict_result)
Download the proof
For more detailed information on proving, please consult the Prove resource.

Initiating a verifiable inference sets off a proving job on our server, sparing you the
complexities of installing and configuring the prover yourself. Upon completion, you can
download your proof.

First, let's check the status of the proving job to ensure that it has been completed.

Remember to substitute endpoint-id and proof-id with the specific IDs assigned to
you throughout this tutorial.

$ giza endpoints get-proof --endpoint-id 190 --proof-id "546f8817fa454db78

>>>
[giza][2024-03-19 11:51:45.470] Getting proof from endpoint 190 ✅
{
"id": 664,
"job_id": 831,
"metrics": {
"proving_time": 15.083126
},
"created_date": "2024-03-19T10:41:11.120310"
}

Once the proof is ready, you can download it.

$ giza endpoints download-proof --endpoint-id 190 --proof-id "546f8817fa45

>>>>
[giza][2024-03-19 11:55:49.713] Getting proof from endpoint 190 ✅
[giza][2024-03-19 11:55:50.493] Proof downloaded to zk_xgboost.proof ✅

It is better to surround the proof-id with double quotes (") when using the alphanumeric id.
Verify the proof
Finally, you can verify the proof.

$ giza verify --proof-id 664

>>>>
[giza][2024-05-21 10:08:59.315] Verifying proof...
[giza][2024-05-21 10:09:00.268] Verification result: True
[giza][2024-05-21 10:09:00.270] Verification time: 0.437505093
Verifiable Linear Regression
In this tutorial, you will learn how to use the Giza stack through a Linear Regression model.

Installation
To follow this tutorial, you must first proceed with the following installation.

Handling Python versions with Pyenv

You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.

Install Python 3.11 using pyenv

pyenv install 3.11.0

Set Python 3.11 as local Python version:


pyenv local 3.11.0

Create a virtual environment using Python 3.11:


pyenv virtualenv 3.11.0 my-env

Activate the virtual environment:


pyenv activate my-env

Now, your terminal session will use Python 3.11 for this project.
Install Giza

Install Giza CLI


Install the CLI from PyPi

pipx install giza-cli

Install Agent SDK


Install the Agents package from PyPi

pip install giza-agents

You'll find more options for installing Giza in the installation guide.

Install Dependencies

You must also install the following dependencies:

pip install scikit-learn skl2onnx numpy

Setup
From your terminal, create a Giza user through our CLI in order to access the Giza
Platform:

giza users create

After creating your user, log into Giza:

giza users login


Optional: you can create an API Key for your user in order to not regenerate your access
token every few hours.

giza users create-api-key

Create and Train a Linear Regression Model


We'll start by creating a simple linear regression model using Scikit-Learn and train it
with some dummy data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Generate some dummy data


X = np.random.rand(100, 1) * 10 # 100 samples, 1 feature
y = 2 * X + 1 + np.random.randn(100, 1) * 2 # y = 2x + 1 + noise

# Split the data into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, r

# Create a linear regression model


model = LinearRegression()

# Train the model


model.fit(X_train, y_train)

Convert the Model to ONNX Format


Giza supports ONNX models so you'll need to convert the model to ONNX format. After
the model is trained, you can convert it to ONNX format using the skl2onnx library.
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Define the initial types for the ONNX model


initial_type = [('float_input', FloatTensorType([None, X_train.shape[1]]))]

# Convert the scikit-learn model to ONNX


onnx_model = convert_sklearn(model, initial_types=initial_type)

# Save the ONNX model to a file


with open("linear_regression.onnx", "wb") as f:
f.write(onnx_model.SerializeToString())

Transpile your model to Orion Cairo


For more detailed information on transpilation, please consult the Transpiler resource.

We will use Giza-CLI to transpile our ONNX model to Orion Cairo.

$ giza transpile linear_regression.onnx --output-path verifiable_lr


>>>>
[giza][2024-03-19 10:43:11.351] No model id provided, checking if model ex
[giza][2024-03-19 10:43:11.354] Model name is: linear_regression
[giza][2024-03-19 10:43:11.586] Model Created with id -> 447! ✅
[giza][2024-03-19 10:43:12.093] Version Created with id -> 1! ✅
[giza][2024-03-19 10:43:12.094] Sending model for transpilation ✅
[giza][2024-03-19 10:43:43.185] Transpilation is fully compatible. Version
⠧ Transpiling Model...
[giza][2024-03-19 10:43:43.723] Downloading model ✅
[giza][2024-03-19 10:43:43.731] model saved at: verifiable_lr

Deploy an inference endpoint


For more detailed information on inference endpoint, please consult the Endpoint resource.
Now that our model is transpiled to Cairo we can deploy an endpoint to run verifiable
inferences. We will use Giza CLI again to deploy an endpoint. Ensure to replace
model-id and version-id with your ids provided during transpilation.

$ giza endpoints deploy --model-id 447 --version-id 1

▰▱▱▱▱▱▱ Creating endpoint!


[giza][2024-03-19 10:51:48.551] Endpoint is successful ✅
[giza][2024-03-19 10:51:48.557] Endpoint created with id -> 109 ✅
[giza][2024-03-19 10:51:48.558] Endpoint created with endpoint URL: https

Run a verifiable inference


To streamline verifiable inference, you might consider using the endpoint URL obtained
after transpilation. However, this approach requires manual serialization of the input for
the Cairo program and handling the deserialization process. To make this process more
user-friendly and keep you within a Python environment, we've introduced a Python SDK
designed to facilitate the creation of ML workflows and execution of verifiable
predictions. When you initiate a prediction, our system automatically retrieves the
endpoint URL you deployed earlier, converts your input into Cairo-compatible format,
executes the prediction, and then converts the output back into a numpy object.
import numpy as np

from giza.agents.model import GizaModel

MODEL_ID = 447  # Update with your model ID
VERSION_ID = 1  # Update with your version ID

def prediction(input, model_id, version_id):
    model = GizaModel(id=model_id, version=version_id)

    (result, proof_id) = model.predict(
        input_feed={'input': input}, verifiable=True
    )

    return result, proof_id

def execution():
    # The input data type should match the model's expected input
    input = np.array([[5.5]]).astype(np.float32)

    (result, proof_id) = prediction(input, MODEL_ID, VERSION_ID)

    print(
        f"Predicted value for input {input.flatten()[0]} is {result[0].flatten()[0]}"
    )

    return result, proof_id

execution()

11:34:04.423 | INFO | Created flow run 'proud-perch' for flow 'Exectute


11:34:04.424 | INFO | Action run 'proud-perch' - View at https://action
11:34:04.746 | INFO | Action run 'proud-perch' - Created task run 'Pred
11:34:04.748 | INFO | Action run 'proud-perch' - Executing 'PredictLRMo
🚀 Starting deserialization process...
✅ Deserialization completed! 🎉
11:34:08.194 | INFO | Task run 'PredictLRModel-0' - Finished in state C
11:34:08.197 | INFO | Action run 'proud-perch' - Predicted value for in
11:34:08.313 | INFO | Action run 'proud-perch' - Finished in state Comp
(array([[12.20851135]]), '"3a15bca06d1f4788b36c1c54fa71ba07"')

Download the proof


For more detailed information on proving, please consult the Prove resource.

Initiating a verifiable inference sets off a proving job on our server, sparing you the
complexities of installing and configuring the prover yourself. Upon completion, you can
download your proof.

First, let's check the status of the proving job to ensure that it has been completed.

Remember to substitute endpoint-id and proof-id with the specific IDs assigned to
you throughout this tutorial.

$ giza endpoints get-proof --endpoint-id 109 --proof-id "3a15bca06d1f4788b

>>>
[giza][2024-03-19 11:51:45.470] Getting proof from endpoint 109 ✅
{
"id": 664,
"job_id": 831,
"metrics": {
"proving_time": 15.083126
},
"created_date": "2024-03-19T10:41:11.120310"
}

Once the proof is ready, you can download it.

$ giza endpoints download-proof --endpoint-id 109 --proof-id "3a15bca06d1f

>>>>
[giza][2024-03-19 11:55:49.713] Getting proof from endpoint 109 ✅
[giza][2024-03-19 11:55:50.493] Proof downloaded to zklr.proof ✅

It is better to surround the proof-id with double quotes (") when using the alphanumeric id.

Verify the proof


Finally, you can verify the proof.

$ giza verify --proof-id 664

>>>>
[giza][2024-05-21 10:08:59.315] Verifying proof...
[giza][2024-05-21 10:09:00.268] Verification result: True
[giza][2024-05-21 10:09:00.270] Verification time: 0.437505093
Verifiable MNIST Neural Network
Giza provides developers with the tools to easily create and expand Verifiable Machine
Learning solutions, transforming their Python scripts and ML models into robust,
repeatable workflows.

In this tutorial, we will explore the process of building your first Neural Network using
the MNIST dataset, PyTorch, and the Giza SDK, and demonstrating its verifiability.

What is MNIST dataset?


The MNIST dataset is an extensive collection of handwritten digits, very popular in the
field of image processing. Often, it's used as a reference point for machine learning
algorithms. This dataset conveniently comes already partitioned into training and testing
sets, a feature we'll delve into later in this tutorial.

The MNIST database comprises a collection of 70,000 images of handwritten digits,


ranging from 0 to 9. Each image measures 28 x 28 pixels. For the purpose of this tutorial,
we will resize the images to 14 x 14 pixels.
Installation
To follow this tutorial, you must first proceed with the following installation.

Handling Python versions with Pyenv

You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.

Install Python 3.11 using pyenv

pyenv install 3.11.0

Set Python 3.11 as local Python version:


pyenv local 3.11.0

Create a virtual environment using Python 3.11:


pyenv virtualenv 3.11.0 my-env

Activate the virtual environment:


pyenv activate my-env

Now, your terminal session will use Python 3.11 for this project.

Install Giza
Install Giza CLI
Install the CLI from PyPi:

pip install giza-cli

Install Agent SDK


Install the Agents package from PyPi

pip install giza-agents

You'll find more options for installing Giza in the installation guide.

Install Dependencies

You must also install the following dependencies:

pip install torch scipy numpy

Setup
From your terminal, create a Giza user through our CLI in order to access the Giza
Platform:

giza users create

After creating your user, log into Giza:

giza users login

Optional: you can create an API Key for your user in order to not regenerate your access
token every few hours.
giza users create-api-key

Define and Train a Model

Step 1: Set Up the Environment


First, import the necessary libraries and configure the environment settings.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import numpy as np
import logging
from scipy.ndimage import zoom
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

Step 2: Define Model Parameters


Specify the parameters that will define the neural network's structure.

input_size = 196 # 14x14


hidden_size = 10
num_classes = 10
num_epochs = 10
batch_size = 256
learning_rate = 0.001

Step 3: Create the Neural Network Model


Define the neural network architecture.
class NeuralNet(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super(NeuralNet, self).__init__()
self.input_size = input_size
self.l1 = nn.Linear(input_size, hidden_size)
self.relu = nn.ReLU()
self.l2 = nn.Linear(hidden_size, num_classes)

def forward(self, x):


out = self.l1(x)
out = self.relu(out)
out = self.l2(out)
return out

Step 4: Prepare Datasets


Write functions to resize the images and prepare training and testing datasets.

def resize_images(images):
    return np.array([zoom(image[0], (0.5, 0.5)) for image in images])

def prepare_datasets():
    print("Prepare dataset...")
    train_dataset = torchvision.datasets.MNIST(root='./data', train=True, download=True)
    test_dataset = torchvision.datasets.MNIST(root='./data', train=False)

    x_train = resize_images(train_dataset)
    x_test = resize_images(test_dataset)

    x_train = torch.tensor(x_train.reshape(-1, 14*14).astype('float32') / 255)
    y_train = torch.tensor([label for _, label in train_dataset], dtype=torch.long)

    x_test = torch.tensor(x_test.reshape(-1, 14*14).astype('float32') / 255)
    y_test = torch.tensor([label for _, label in test_dataset], dtype=torch.long)

    print(" ✅ Datasets prepared successfully")

    return x_train, y_train, x_test, y_test

Step 5: Create Data Loaders


Create data loaders to manage batches of the datasets.

def create_data_loaders(x_train, y_train, x_test, y_test):
    print("Create loaders...")

    train_loader = DataLoader(TensorDataset(x_train, y_train), batch_size=batch_size)
    test_loader = DataLoader(TensorDataset(x_test, y_test), batch_size=batch_size)

    print(" ✅ Loaders created!")

    return train_loader, test_loader

Step 6: Train the Model


Develop a function to train the model using the training loader.

def train_model(train_loader):
    print("Train model...")

    model = NeuralNet(input_size, hidden_size, num_classes).to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)

    for epoch in range(num_epochs):
        for i, (images, labels) in enumerate(train_loader):
            images = images.to(device).reshape(-1, 14*14)
            labels = labels.to(device)

            outputs = model(images)
            loss = criterion(outputs, labels)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if (i + 1) % 100 == 0:
                print(f'Epoch [{epoch + 1}/{num_epochs}], Step [{i + 1}/{len(train_loader)}], Loss: {loss.item():.4f}')

    print(" ✅ Model trained successfully")

    return model
Step 7: Test the Model
Define a function to evaluate the model's performance on the test set.

def test_model(model, test_loader):
    print("Test model...")
    with torch.no_grad():
        n_correct = 0
        n_samples = 0
        for images, labels in test_loader:
            images = images.to(device).reshape(-1, 14*14)
            labels = labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            n_samples += labels.size(0)
            n_correct += (predicted == labels).sum().item()

        acc = 100.0 * n_correct / n_samples
        print(f'Accuracy of the network on the 10000 test images: {acc} %')

Step 8: Execute the Tasks


Create a function to execute all the previous steps in sequence.

def execution():
    # Prepare training and testing datasets
    x_train, y_train, x_test, y_test = prepare_datasets()

    train_loader, test_loader = create_data_loaders(x_train, y_train, x_test, y_test)

    model = train_model(train_loader)

    test_model(model, test_loader)

    return model

model = execution()

Convert the model to ONNX


Before transpiling the model we've just trained to ZK circuits, we need to convert the
model to the ONNX framework. You can consult the list of frameworks supported by the
Transpiler, here.

ONNX, short for Open Neural Network Exchange, is an open format for representing and
exchanging machine learning models between different frameworks and libraries. It
serves as an intermediary format that allows you to move models seamlessly between
various platforms and tools, facilitating interoperability and flexibility in the machine
learning ecosystem.

import torch.onnx

def convert_to_onnx(model, onnx_file_path):
    dummy_input = torch.randn(1, input_size).to(device)
    torch.onnx.export(model, dummy_input, onnx_file_path,
                      export_params=True, opset_version=10, do_constant_folding=True)

    print(f"Model has been converted to ONNX and saved as {onnx_file_path}")

onnx_file_path = "mnist_model.onnx"
convert_to_onnx(model, onnx_file_path)

Transpile your Model to Orion Cairo


For more detailed information on transpilation, please consult the Transpiler resource.

Now that your model is converted to ONNX format, use the Giza-CLI to transpile it to
Orion Cairo code.
> giza transpile mnist_model.onnx --output-path verifiable_mnist
>>>
[giza][2024-02-07 16:31:20.844] No model id provided, checking if model ex
[giza][2024-02-07 16:31:20.845] Model name is: mnist_model
[giza][2024-02-07 16:31:21.599] Model Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.436] Version Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.437] Sending model for transpilation ✅
[giza][2024-02-07 16:32:13.511] Transpilation is fully compatible. Version
[giza][2024-02-07 16:32:13.516] Transpilation recieved! ✅
[giza][2024-02-07 16:32:14.349] Transpilation saved at: verifiable_mnist

Thanks to full support for all operators used by the MNIST model in the transpiler, your
transpilation process is completely compatible. This ensures that your project compiles
smoothly and has already been compiled behind the scenes on our platform.

If your model incorporates operators that aren't supported by the transpiler, you may need
to refine your Cairo project to ensure successful compilation. For more details, refer to the
Transpiler resource.

Deploy an Inference Endpoint


For more detailed information on inference endpoint, please consult the Endpoint resource.

With your model successfully transpiled, it's now ready for deployment of an inference
endpoint. Our deployment process sets up services that handle prediction requests via a
designated endpoint, using Cairo to ensure inference provability.

Deploy your service, which will be ready to accept prediction requests at the
/cairo_run endpoint, by using the following command:

giza endpoints deploy --model-id 1 --version-id 1


▰▰▰▰▰▱▱ Creating endpoint!
[giza][2024-02-07 12:31:02.498] Endpoint is successful ✅
[giza][2024-02-07 12:31:02.501] Endpoint created with id -> 1 ✅
[giza][2024-02-07 12:31:02.502] Endpoint created with endpoint URL: https
Run a Verifiable Inference
Now that your Cairo model is deployed on the Giza platform, you have the capability to
execute it.

When you initiate a prediction using Giza, it executes the Cairo program using CairoVM,
generating trace and memory files for the proving. It also returns the output value and
initiates a proving job to generate a Stark proof of the inference.

Update the IDs in the following code with your own.


from giza.agents.model import GizaModel

def preprocess_image(image_path):
from PIL import Image
import numpy as np

# Load image, convert to grayscale, resize and normalize


image = Image.open(image_path).convert('L')
# Resize to match the input size of the model
image = image.resize((14, 14))
image = np.array(image).astype('float32') / 255
image = image.reshape(1, 196) # Reshape to (1, 196) for model input
return image

MODEL_ID = 1 # Update with your model ID


VERSION_ID = 1 # Update with your version ID

def prediction(image, model_id, version_id):


model = GizaModel(id=model_id, version=version_id)

(result, request_id) = model.predict(


input_feed={"image": image}, verifiable=True
)

# Convert result to a PyTorch tensor


result_tensor = torch.tensor(result)
# Apply softmax to convert to probabilities
probabilities = F.softmax(result_tensor, dim=1)
# Use argmax to get the predicted class
predicted_class = torch.argmax(probabilities, dim=1)

return predicted_class.item(), request_id

def execution():
image = preprocess_image("./imgs/zero.png")
(result, request_id) = prediction(image, MODEL_ID, VERSION_ID)
print("Result: ", result)
print("Request id: ", request_id)

return result, request_id

execution()
🚀 Starting deserialization process...
✅ Deserialization completed! 🎉
(0, '"3a15bca06d1f4788b36c1c54fa71ba07"')

Download the proof


For more detailed information on proving, please consult the Prove resource.

Executing a verifiable inference sets off a proving job on our server, sparing you the
complexities of installing and configuring the prover yourself. Upon completion, you can
download your proof.

First, let's check the status of the proving job to ensure that it has been completed.

Remember to substitute endpoint-id and proof-id with the specific IDs assigned to
you throughout this tutorial.

giza endpoints get-proof --endpoint-id 1 --proof-id "3a15bca06d1f4788b36c1

>>>
[giza][2024-03-19 11:51:45.470] Getting proof from endpoint 109 ✅
{
"id": 664,
"job_id": 831,
"metrics": {
"proving_time": 15.083126
},
"created_date": "2024-03-19T10:41:11.120310"
}

Once the proof is ready, you can download it.


$ giza endpoints download-proof --endpoint-id 1 --proof-id "3a15bca06d1f47

>>>>
[giza][2024-03-19 11:55:49.713] Getting proof from endpoint 1 ✅
[giza][2024-03-19 11:55:50.493] Proof downloaded to zk_mnist.proof ✅
Voilà 🎉🎉
You've learned how to use the entire Giza stack, from training your model to transpiling it
to Cairo for verifiable execution. We hope you've enjoyed this journey!

It is better to surround the proof-id with double quotes (") when using the alphanumeric id.

Verify the proof


Finally, you can verify the proof.

$ giza verify --proof-id 664

>>>>
[giza][2024-05-21 10:08:59.315] Verifying proof...
[giza][2024-05-21 10:09:00.268] Verification result: True
[giza][2024-05-21 10:09:00.270] Verification time: 0.437505093
AI Agents
In this collection of tutorials, you'll learn how to use AI Agents to make your smart
contracts smarter.

Create an AI Agent to Mint an MNIST NFT (beginner, agent)
Using Arbitrum with AI Agents (intermediate, agent)
Uniswap V3 LP Rebalancing with AI Agent (advanced, agent)
Pendle Trading Agent

1. Pendle Protocol Primer


Pendle Documentation

Pendle Protocol is a permissionless yield trading protocol that uses tokenization to
separate yield from the underlying token. To briefly summarize, any yield-bearing token
(in our project, we use weETH) can be wrapped into its standardized SY Pendle variant
(SY-weETH). This SY token, which represents both the underlying non-yield token (ETH)
as well as its yield-bearing portion, can then be separated and traded as two tokenized
components, called the Principal Token (PT-weETH) and the Yield Token (YT-weETH).
Through its PT-SY AMMs, Pendle Protocol allows users to trade between the potential
yield and the underlying value of the token and consequently creates yield markets for
many popular yield-bearing tokens.

This agent focuses on weETH, the wrapped version of the ether.fi token, which is
traded between its SY and PT variants. To execute informed and profitable trades, the agent
leverages a yield prediction ZKML model for eETH, verifies its output using its ZKML
proof, and ultimately compares it with the PT-SY price as well as the fixed PT yield to
automatically trade between PT-weETH and SY-weETH tokens.
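As a rough, hypothetical sketch of that comparison (not the actual swap_logic implementation used in this project), the core decision can be summarized as follows:

def should_buy_pt(predicted_variable_apr: float, fixed_pt_yield: float) -> bool:
    # Hypothetical rule of thumb: if the model predicts the variable eETH yield
    # to end up below the fixed yield implied by PT-weETH, locking in the fixed
    # rate by swapping SY-weETH for PT-weETH is the more attractive position.
    return predicted_variable_apr < fixed_pt_yield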

2. Installing Dependencies and Project Setup


Python 3.11 or later must be installed; we recommend using a virtual environment.

This project uses poetry as the dependency manager, to install the required
dependencies simply execute:

poetry install

An active Giza account is required to deploy the model and use agents. If you don't
have one, you can create one here.
You also need an Ape account to use Giza Agents; you can read how to create an Ape
account, as well as the basics of the Ape Framework, here.
To run a forked Ethereum network locally, you need to install Foundry.
Finally, we will need some environment variables. Create a .env file in the directory of
this project and populate it with one variable:

DEV_PASSPHRASE="<YOUR-APE-ACCOUNT-PASSWORD>"

3. Yield Prediction Model


This project uses a relatively simple, 3-layer neural network model to predict the yield
(APR) of the eETH token after 7 days. You can use the yield_prediction_model.ipynb
notebook to follow the implementation of the model step-by-step. At the end of the
notebook, the developed model is exported in .onnx format, which will then be used
within the Giza CLI to create the ZKML endpoint.
4. Model transpilation and Agent creation
Take a look at the yield_prediction_actions_notebook.ipynb and follow the steps to learn
how to create an agent (as well as learn the fundamentals of Giza CLI step-by-step)

Checklist:

Create a Giza User


Transpile the model using Orion
Create a Giza Workspace
Deploy your ZKML model into your workspace
Create an Endpoint using the ZKML model
Create a Giza Agent that listens to that Endpoint

5. Creating a local fork of Ethereum Mainnet and Agent setup
We are assuming here that you have already installed Foundry

Open up a new terminal, and type in the following command

anvil --fork-url <RPC_URL> --fork-block-number 19754466 --fork-chain-id 1

This creates a local Ethereum mainnet network (chain-id 1) forked from block 19754466,
which we will use to run our agent on. Working on a local fork provides various
advantages, such as being able to experiment with any smart contract and protocol that
is on the target network.

To be able to use the agent, we require some tokens to begin with.

Before running setup.py, make sure to edit the marked lines to include your Ape
password and your Ape username:
python agent/setup.py

This command will log in to your Ape wallet and mint you the required amount of weETH
to be able to run the agent.

6. Pendle Agent
Before running the agent, let's look at some code snippets to understand what the code is
actually doing.

if __name__ == "__main__":
# Create the parser
parser = argparse.ArgumentParser()

# Add arguments
parser.add_argument("--agent-id", metavar="A", type=int, help="model-
parser.add_argument("--weETH-amount", metavar="W", type=float, help="w
parser.add_argument("--fixed-yield", metavar="Y", type=float, help="f
parser.add_argument("--expiration-days", metavar="E", type=int, help=

# Parse arguments
args = parser.parse_args()

agent_id = args.agent_id
weETH_amount = args.weETH_amount
fixed_yield = args.fixed_yield
expiration_days = args.expiration_days

SY_PY_swap(weETH_amount,agent_id,fixed_yield, expiration_days)

To run the agent, we need to pass as arguments the id of the agent (in our case: 5), the
weETH amount that we allocate to the wallet (let's give 5 weETH), the fixed yield and the
expiration date of the pool (these two can be parsed from the pools in a later iteration).
The main block simply parses the arguments and runs the main function of the agent,
SY_PY_swap().
contracts = {
"router": router,
"routerProxy": routerProxy,
"SY_weETH_Market": SY_weETH_Market,
"weETH": weETH,
"PT_weETH": PT_weETH,
}
agent = create_agent(
agent_id=agent_id,
chain=chain,
contracts=contracts,
account_alias=account,
)

We are putting all the contracts that our agent will interact with in a dictionary, and using
that in agent creation. Giza Agents automatically creates the Contracts() objects you
might be familiar with from the Ape Framework.

with agent.execute() as contracts:


logger.info("Verification complete, executing contract")

decimals = contracts.weETH.decimals()
weETH_amount = weETH_amount * 10**decimals

state = contracts.SY_weETH_Market.readState(contracts.router.addre

PT_price = calculate_price(state.lastLnImpliedRate, decimals)


logger.info(f"Calculated Price: {PT_price}")

traded_SY_amount, PT_weight = swap_logic(weETH_amount, PT_price, f

logger.info(f"The amount of SY to be traded: {traded_SY_amount}, P

contracts.weETH.approve(contracts.routerProxy.address, traded_SY_a

contracts.routerProxy.swapExactTokenForPt(wallet_address, contract
,input_tuple(contracts.weETH, traded_SY

PT_balance = contracts.PT_weETH.balanceOf(wallet_address)
weETH_balance = contracts.weETH.balanceOf(wallet_address)

logger.info(f"Swap succesfull! Currently, you own: {PT_balance} PT


The lines starting from agent.execute() represent the onchain interactions that take
place. As you can see, we easily access the functions of the contracts we have given as
input to the agent by using their dictionary keys. We calculate the price of the PT-SY
swap, then calculate how much SY we want to sell to buy PT with the swap_logic()
function. We approve the token exchange, and then swap the calculated amount of SY
tokens for PT tokens.

7. Running the Pendle Agent


To execute the agent, run the following command

python agent/agent.py --agent-id 4 --weETH-amount 5 --fixed-yield 1.2 --ex

You can change the variables to see how it affects the trade at the end.
15:53:46.145 | INFO | Created flow run 'imperial-sloth' for flow 'SY-PY
15:53:46.145 | INFO | Action run 'imperial-sloth' - View at https://act
15:53:46.765 | INFO | Action run 'imperial-sloth' - Created task run 'C
15:53:46.765 | INFO | Action run 'imperial-sloth' - Executing 'Create a
15:53:48.583 | INFO | Task run 'Create a Giza agent using Agent_ID-0'
15:53:48.673 | INFO | Action run 'imperial-sloth' - Created task run 'R
15:53:48.673 | INFO | Action run 'imperial-sloth' - Executing 'Run the
🚀 Starting deserialization process...
✅ Deserialization completed! 🎉
15:53:53.102 | INFO | Task run 'Run the yield prediction model-0' - Fin
15:53:53.104 | INFO | Action run 'imperial-sloth' - Result: AgentResult
0.02796276, 1.01916988]])}, request_id=4e8cc39af7a443a0af66b0426cb
INFO: Connecting to existing Erigon node at https://ethereum-rpc.publicnod
15:53:53.668 | INFO | Connecting to existing Erigon node at https://ethere
WARNING: Danger! This account will now sign any transaction it's given.
15:53:54.638 | WARNING | Danger! This account will now sign any transactio
15:53:54.649 | INFO | Action run 'imperial-sloth' - Verification comple
15:53:56.089 | INFO | Action run 'imperial-sloth' - Calculated Price: 1
15:56:41.745 | INFO | Action run 'imperial-sloth' - The amount of SY to
WARNING: Using cached key for pendle-agent
15:56:41.765 | WARNING | Using cached key for pendle-agent
INFO: Confirmed 0xfaa27a6370eacd3123baa95fb5f9597a8385897be935e818d719dbc8
15:56:42.123 | INFO | Confirmed 0xfaa27a6370eacd3123baa95fb5f9597a8385897b
INFO: Confirmed 0xfaa27a6370eacd3123baa95fb5f9597a8385897be935e818d719dbc8
15:56:42.130 | INFO | Confirmed 0xfaa27a6370eacd3123baa95fb5f9597a8385897b
WARNING: Using cached key for pendle-agent
15:56:42.457 | WARNING | Using cached key for pendle-agent
INFO: Confirmed 0xb37d84466093365255b4dd3e6a90bbb4b0fd5f38a5b8ee19449f385f
15:56:46.165 | INFO | Confirmed 0xb37d84466093365255b4dd3e6a90bbb4b0fd5f38
INFO: Confirmed 0xb37d84466093365255b4dd3e6a90bbb4b0fd5f38a5b8ee19449f385f
15:56:46.174 | INFO | Confirmed 0xb37d84466093365255b4dd3e6a90bbb4b0fd5f38
15:56:46.186 | INFO | Action run 'imperial-sloth' - Swap succesfull! Cu
15:56:46.398 | INFO | Action run 'imperial-sloth' - Finished in state C

Congrats, you have just used a Giza Agent to trade on Pendle Protocol using a
ZKML yield prediction model.
Create an AI Agent to Mint an MNIST NFT
Agents are entities designed to assist users in interacting with Smart Contracts by
managing the proof verification of verifiable ML models and executing these contracts
using Ape's framework.

Agents serve as intermediaries between users and Smart Contracts, facilitating seamless
interaction with verifiable ML models and executing associated contracts. They handle
the verification of proofs, ensuring the integrity and authenticity of data used in contract
execution.

In this tutorial, we will create an AI Agent to mint an MNIST NFT. Using an existing MNIST
endpoint, we will perform a prediction on the MNIST dataset and mint an NFT based on
the prediction.

This tutorial can be thought of as a continuation of the previous tutorial, Verifiable MNIST
Neural Network.

Installation
To follow this tutorial, you must first proceed with the following installation.

Handling Python versions with Pyenv

You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.

Install Python 3.11 using pyenv

pyenv install 3.11.0


Set Python 3.11 as local Python version:
pyenv local 3.11.0

Create a virtual environment using Python 3.11:


pyenv virtualenv 3.11.0 my-env

Activate the virtual environment:


pyenv activate my-env

Now, your terminal session will use Python 3.11 for this project.

Install Giza

Install Giza CLI


Install the CLI from PyPi

pip install giza-cli

Install Agent SDK


Install the Agents package from PyPi

pip install giza-agents

You'll find more options for installing Giza in the installation guide.

Install Dependencies

You must also install the following dependencies:


pip install torch scipy numpy

Setup
From your terminal, create a Giza user through our CLI in order to access the Giza
Platform:

giza users create

After creating your user, log into Giza:

giza users login

Create an API Key for your user so that you don't have to regenerate your access token every few hours.

giza users create-api-key

Before you begin


We assume that you already have an MNIST inference endpoint deployed on Giza. If not,
please follow the Verifiable MNIST Neural Network tutorial to deploy an MNIST endpoint.

Creating an Ape Account (Wallet)


Before we can create an AI Agent, we need to create an account using Ape's framework.
We can do this by running the following command:
$ ape accounts generate <account name>
Enhance the security of your account by adding additional random input:
Show mnemonic? [Y/n]: n
Create Passphrase to encrypt account:
Repeat for confirmation:
SUCCESS: A new account '0x766867bB2E3E1A6E6245F4930b47E9aF54cEba0C' with
HDPath m/44'/60'/0'/0/0 has been added with the id '<account name>'

This will create a new account under $HOME/.ape/accounts using the keyfile structure from the eth-keyfile library. For more information on account management, you can refer to Ape's framework documentation.

During account generation you will be prompted to enter a passphrase to encrypt the account. This passphrase will be used to unlock the account when needed, so make sure to keep it safe.

We encourage the creation of a new account for each agent, as it will allow you to
manage the agent's permissions and access control more effectively, but importing
accounts is also possible.
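If you would rather reuse an existing key instead of generating a new one, Ape can import it under an alias. A minimal sketch (the exact prompts may vary with your Ape version):

ape accounts import <account name>

You will be prompted for the private key and a passphrase to encrypt the imported account.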

Funding the account


Before we can create an AI Agent, we need to fund the account with some ETH. You can
do this by sending some ETH to the account address generated in the previous step.

In this case we will use the Sepolia testnet; you can get some testnet ETH from a faucet like the Alchemy Sepolia Faucet or the LearnWeb3 Faucet. These faucets will ask for security measures to make sure you are not a bot, such as holding a specific amount of ETH on mainnet or having a GitHub account. There are many faucets available, so you can choose the one that suits you best.

Once we have received the testnet ETH and have a funded account, we can proceed to create an AI Agent.

Creating an AI Agent
Now that we have a funded account, we can create an AI Agent. We can do this by running
the following command:

giza agents create --model-id <model-id> --version-id <version-id> --name <agent name> --description <agent description>

# or if you have the endpoint-id

giza agents create --endpoint-id <endpoint-id> --name <agent name> --description <agent description>

This command will prompt you to choose the account you want to use with the agent,
once you select the account, the agent will be created and you will receive the agent id.
The output will look like this:

[giza][2024-04-10 11:50:24.005] Creating agent ✅
[giza][2024-04-10 11:50:24.006] Using endpoint id to create agent, retrieving model id and version id
[giza][2024-04-10 11:50:53.480] Select an existing account to create the agent.
[giza][2024-04-10 11:50:53.480] Available accounts are:
┏━━━━━━━━━━━━━┓
┃ Accounts ┃
┡━━━━━━━━━━━━━┩
│ my_account │
└─────────────┘
Enter the account name: my_account
{
"id": 1,
"name": <agent_name>,
"description": <agent_description>,
"parameters": {
"model_id": <model_id>,
"version_id": <version_id>,
"endpoint_id": <endpoint_id>,
"alias": "my_account"
},
"created_date": "2024-04-10T09:51:04.226448",
"last_update": "2024-04-10T09:51:04.226448"
}

This will create an AI Agent that can be used to interact with the deployed MNIST model.
How to use the AI Agent
Now that the agent is created, we can start using it through the Agents SDK. As we will be using the agent to mint an MNIST NFT, we will need to provide the MNIST image to the agent and preprocess it before sending it to the model.

import pprint
import numpy as np
from PIL import Image

from giza.agents import GizaAgent, AgentResult

# Make sure to fill these in
MODEL_ID = <model-id>
VERSION_ID = <version-id>
# As we are executing in sepolia, we need to specify the chain
CHAIN = "ethereum:sepolia:geth"
# The address of the deployed contract
MNIST_CONTRACT = "0x7FE10f158b57CF9e48Af672EC7A43D0c4952da17"

Now we will start creating the functions that are necessary to create a program that will:

Load and preprocess the MNIST image
Create an instance of an agent
Predict the MNIST image using the agent
Access the prediction result
Mint an NFT based on the prediction result

To load and preprocess the image we can use the function that we already created in the
previous tutorial Verifiable MNIST Neural Network.
# This function is from the previous MNIST tutorial
def preprocess_image(image_path: str):
    """
    Preprocess an image for the MNIST model.

    Args:
        image_path (str): Path to the image file.

    Returns:
        np.ndarray: Preprocessed image.
    """
    # Load image, convert to grayscale, resize and normalize
    image = Image.open(image_path).convert('L')
    # Resize to match the input size of the model
    image = image.resize((14, 14))
    image = np.array(image).astype('float32') / 255
    image = image.reshape(1, 196)  # Reshape to (1, 196) for model input
    return image

Now, we will create a function to create an instance of the agent. The agent is an extension of the GizaModel class, so we can execute predict as if we were using a model, but the agent needs more information:

chain: The chain where the contract and account are deployed
contracts: This is a dictionary in the form of
{"contract_alias": "contract_address"} that contains the contract alias and
address.

This contract_alias is the alias that we will use when executing the contract through
code.
def create_agent(model_id: int, version_id: int, chain: str, contract: str):
    """
    Create a Giza agent for the MNIST model.
    """
    agent = GizaAgent(
        contracts={"mnist": contract},
        id=model_id,
        version_id=version_id,
        chain=chain,
        account="my_account1"
    )
    return agent

This function will execute the predict method of the agent following the same format as a GizaModel instance, but in this case the agent returns an AgentResult object that contains the prediction result, the request id, and multiple utilities to handle the verification of the prediction.

def predict(agent: GizaAgent, image: np.ndarray):
    """
    Predict the digit in an image.

    Args:
        image (np.ndarray): Image to predict.

    Returns:
        int: Predicted digit.
    """
    prediction = agent.predict(
        input_feed={"image": image}, verifiable=True
    )
    return prediction

Once we have the result, we need to access it. The AgentResult object contains a value attribute that holds the prediction result. In this case, the prediction result is a number from 0 to 9 that represents the digit the model predicted.

When we access the value, the execution will block until the proof of the prediction has been created, and once created it is verified. If the proof is not valid, the execution will raise an exception.

The following function is mostly for didactic purposes, to showcase in the workspaces how long proof verification can take. In a real scenario, you can use the AgentResult value directly in any other task.

def get_digit(prediction: AgentResult):
    """
    Get the digit from the prediction.

    Args:
        prediction (AgentResult): Prediction from the model.

    Returns:
        int: Predicted digit.
    """
    # This will block execution until the proof of the prediction has been generated and verified
    return int(prediction.value[0].argmax())

Finally, we will create a task to mint an NFT based on the prediction result. This task will
use the prediction result to mint an NFT with the image of the MNIST digit predicted.

To execute the contract we have the GizaAgent.execute context that will yield all the
initialized contracts and the agent instance. This context will be used to execute the
contract as it handles all the connections to the nodes, networks, and contracts.

In our instantiation of the agent we added an mnist alias to access the contract to ease
its use. For example:

# A context is created that yields all the contracts used in the agent
with agent.execute() as contracts:
    # This is how we access the contract
    contracts.mnist
    # This is how we access the functions of the contract
    contracts.mnist.mint(...)
def execute_contract(agent: GizaAgent, digit: int):
    """
    Execute the MNIST contract with the predicted digit to mint a new NFT.

    Args:
        agent (GizaAgent): Giza agent.
        digit (int): Predicted digit.

    Returns:
        str: Transaction hash.
    """
    with agent.execute() as contracts:
        contract_result = contracts.mnist.mint(digit)
    return contract_result

Now that we have all the steps defined, we can create the function to execute.

def mint_nft_with_prediction():
    # Preprocess image
    image = preprocess_image("seven.png")
    # Create Giza agent
    agent = create_agent(MODEL_ID, VERSION_ID, CHAIN, MNIST_CONTRACT)
    # Predict digit
    prediction = predict(agent, image)
    # Get digit
    digit = get_digit(prediction)
    # Execute contract
    result = execute_contract(agent, digit)
    pprint.pprint(result)

Remember the passphrase we were told to keep in a safe place? Now it is time to use it. For learning purposes, we will set the passphrase in the code, but in a real scenario you should keep it safe and never hardcode it.

# DO NOT COMMIT YOUR PASSPHRASE
import os
os.environ['MY_ACCOUNT1_PASSPHRASE'] = 'a'

Now let's execute mint_nft_with_prediction.

mint_nft_with_prediction()

Using Ape's default networks (chains) relies on public RPC nodes that could hit request limits and make the execution of the contract fail. For a better experience, consider using a private RPC node with higher quotas.
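As a minimal sketch of that option, you can embed your own RPC URL in the chain string, reusing the create_agent function defined above. The SEPOLIA_RPC_URL environment variable name is an assumption; any Sepolia RPC URL works:

import os

# Assumed env var holding your own Sepolia RPC URL (e.g. an Alchemy or Infura endpoint)
PRIVATE_RPC = os.environ["SEPOLIA_RPC_URL"]

# Same helper as above, but with a custom RPC instead of the default public node
agent = create_agent(MODEL_ID, VERSION_ID, f"ethereum:sepolia:{PRIVATE_RPC}", MNIST_CONTRACT)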

What we have learned


In this tutorial, we learned how to create an AI Agent to mint an MNIST NFT. We created
an AI Agent using Ape's framework and interacted with the agent to mint an NFT based
on the prediction result of an MNIST image.

We learned how to load and preprocess the MNIST image, create an instance of the
agent, predict the MNIST image using the agent, access the prediction result, and mint an
NFT based on the prediction result.
Using Arbitrum with AI Agents
How to interact with Arbitrum using AI Agents

In the previous tutorial we have seen how to use the GizaAgent class to create a simple
agent that can interact with the Ethereum blockchain. In this tutorial we will see how to
use other chains with the GizaAgent class.

As we rely on ape to interact with the blockchain, we can use any chain that ape
supports. The list of supported chains via plugins can be found here.

In this tutorial we will use Arbitrum as an example. Arbitrum is a layer-2 solution for Ethereum that provides low-cost and fast transactions. Arbitrum is supported by ape, so we can use it with the GizaAgent class.

Installation
To follow this tutorial, you must first proceed with the following installation.

Handling Python versions with Pyenv

You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.

Install Python 3.11 using pyenv

pyenv install 3.11.0

Set Python 3.11 as local Python version:

pyenv local 3.11.0

Create a virtual environment using Python 3.11:

pyenv virtualenv 3.11.0 my-env

Activate the virtual environment:


pyenv activate my-env

Now, your terminal session will use Python 3.11 for this project.

Install Giza

Install Giza CLI


Install the CLI from PyPi

pip install giza-cli

Install Agent SDK


Install the package from PyPi

pip install giza-agents

You'll find more options for installing Giza in the installation guide.

Install Dependencies

You must also install the following dependencies:

pip install -U scikit-learn torch pandas


Create or Login a user to Giza
From your terminal, create a Giza user through our CLI in order to access the Giza
Platform:

giza users create

After creating your user, log into Giza:

giza users login

Create an API Key for your user so that you don't have to regenerate your access token every few hours.

giza users create-api-key

Before you begin


You must have a model deployed on Giza. You can follow the tutorial Verifiable MNIST
Neural Network to deploy an MNIST model on Giza.

Create an Ape Account (Wallet)


Before we can create an AI Agent, we need to create an account using Ape's framework.
We can do this by running the following command:

$ ape accounts generate <account name>


Enhance the security of your account by adding additional random input:
Show mnemonic? [Y/n]: n
Create Passphrase to encrypt account:
Repeat for confirmation:
SUCCESS: A new account '0x766867bB2E3E1A6E6245F4930b47E9aF54cEba0C' with HDPath m/44'/60'/0'/0/0 has been added with the id '<account name>'
This will create a new account under $HOME/.ape/accounts using the keyfile structure from the eth-keyfile library. For more information on account management, you can refer to Ape's framework documentation.

During account generation you will be prompted to enter a passphrase to encrypt the account. This passphrase will be used to unlock the account when needed, so make sure to keep it safe.

We encourage the creation of a new account for each agent, as it will allow you to manage
the agent's permissions and access control more effectively, but importing accounts is also
possible.

Create an Agent
Now that we have a funded account, we can create an AI Agent. We can do this by running
the following command:

giza agents create --endpoint-id <endpoint-id> --name <agent name> --description <agent description>

This command will prompt you to choose the account you want to use with the agent,
once you select the account, the agent will be created and you will receive the agent id.
The output will look like this:
[giza][2024-04-10 11:50:24.005] Creating agent ✅
[giza][2024-04-10 11:50:24.006] Using endpoint id to create agent, retrieving model id and version id
[giza][2024-04-10 11:50:53.480] Select an existing account to create the agent.
[giza][2024-04-10 11:50:53.480] Available accounts are:
┏━━━━━━━━━━━━━┓
┃ Accounts ┃
┡━━━━━━━━━━━━━┩
│ my_account │
└─────────────┘
Enter the account name: my_account
{
"id": 1,
"name": <agent_name>,
"description": <agent_description>,
"parameters": {
"model_id": <model_id>,
"version_id": <version_id>,
"endpoint_id": <endpoint_id>,
"alias": "my_account"
},
"created_date": "2024-04-10T09:51:04.226448",
"last_update": "2024-04-10T09:51:04.226448"
}

This will create an AI Agent that can be used to interact with the deployed MNIST model.

Use Agents in Arbitrum


Let's start by installing the ape-arbitrum plugin.

!ape plugins install arbitrum

We can confirm if it has been installed by executing ape networks list in the terminal.
!ape networks list

arbitrum
├── goerli
│ ├── alchemy
│ └── geth (default)
├── local (default)
│ └── test (default)
├── mainnet
│ ├── alchemy
│ └── geth (default)
└── sepolia
├── alchemy
└── geth (default)
ethereum (default)
├── goerli
│ ├── alchemy
│ └── geth (default)
├── local (default)
│ ├── geth
│ └── test (default)
├── mainnet
│ ├── alchemy
│ └── geth (default)
└── sepolia
├── alchemy
└── geth (default)

Here we can see that we have multiple networks available, including arbitrum, so now we can use it when instantiating the GizaAgent class.

For this execution, we will use Arbitrum mainnet with a private RPC node, because public nodes have small quotas that can easily be reached.

The contract is a verified contract selected at random from Arbiscan; the goal is to showcase that we can read properties from this contract, which means that we would also be able to execute a write function.

In this case, as we are only executing a read function, we don't need a funded wallet, since we won't sign any transactions.

Remember that we will need to specify the <Account>_PASSPHRASE if you are launching your operation as a script; exporting it will be enough:

export <Account>_PASSPHRASE=your-passphrase

If you are using it from a notebook, you will need to launch the notebook instance from an environment that has the passphrase variable set, or set it in the code prior to importing giza.agents:

import os
os.environ["<Account>_PASSPHRASE"] = "your-passphrase"

from giza.agents import GizaAgent


...

Now we can instantiate the agent:

import os
os.environ["<Account>_PASSPHRASE"] = ...

from giza.agents import GizaAgent

MODEL_ID = ...
VERSION_ID = ...
ACCOUNT = ...
PRIVATE_RPC = ...  # This can also be loaded from the environment or a .env file

agent = GizaAgent(
    id=MODEL_ID,
    version_id=VERSION_ID,
    chain=f"arbitrum:mainnet:{PRIVATE_RPC}",
    account=ACCOUNT,
    contracts={
        "random": "0x8606d62fD47473Fad2db53Ce7b2B820FdEab7AAF"
    }
)

Now that we have the agent instance, we can enter the execute() context and call the read function from an Arbitrum smart contract:

with agent.execute() as contracts:
    result = contracts.random.name()
    print(f"Contract name is: {result}")

What we have learned


This tutorial taught us how to create an AI Agent and interact with Arbitrum.

For this, we needed to:

Install the arbitrum plugin
Check that the new network is available
Get a contract from Arbiscan
Use an agent to execute a function from an Arbitrum smart contract

Now these same steps can be followed to use any other network supported by ape and
interact with different chains.
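As a sketch of the same pattern for another ape-supported network (the ape-optimism plugin name and the contract address placeholder are assumptions; check ape's plugin list for the network you need):

ape plugins install optimism

agent = GizaAgent(
    id=MODEL_ID,
    version_id=VERSION_ID,
    chain=f"optimism:mainnet:{PRIVATE_RPC}",  # same ecosystem:network:provider format
    account=ACCOUNT,
    contracts={"my_contract": "<contract-address>"},
)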
Uniswap V3 LP Rebalancing with AI Agent
This tutorial shows how to use Giza Agents to automatically rebalance a Uniswap V3 LP
position with a verifiable volatility prediction Machine Learning model.

Introduction
Welcome to this step-by-step tutorial on leveraging Zero-Knowledge Machine Learning
(ZKML) for volatility prediction to manage a liquidity position on Uniswap V3. In this
guide, we will walk through the entire process required to set up, deploy, and maintain an
intelligent liquidity management solution using Giza stack. By the end of this tutorial,
you will have a functional system capable of optimizing your liquidity contributions based
on predictive market analysis.

The primary goal here is to use machine learning predictions to adjust your liquidity
position dynamically in response to anticipated market volatility, thereby maximizing
your potential returns and minimizing risks. This project combines concepts from
decentralized finance (DeFi), machine learning, and blockchain privacy enhancements
(specifically Zero-Knowledge Proofs) to create a liquidity providing strategy for Uniswap
V3, one of the most popular decentralized exchanges.

Note: This project constitutes a Proof of Concept and should not be deployed on a mainnet, since it's likely to underperform. When developing a production-grade liquidity management system, you should consider multiple factors, such as gas fees or impermanent loss. For an introduction to the art of liquidity management on Uniswap V3, you can refer to the following articles:

A Primer on Uniswap v3 Math: As Easy As 1, 2, v3


[DeFi Math] Uniswap V3 Concentrated Liquidity
Impermanent Loss in Uniswap V3

1. Uniswap V3 Overview
Uniswap V3 is the third version of the Uniswap decentralized exchange. It allows users to swap between any pair of ERC-20 tokens in a permissionless way. Additionally, it allows liquidity providers (LPs) to choose custom price ranges in order to maximize capital efficiency.

In a nutshell, LPs earn a fee on each swap performed within a given pool. When LPs want
to provide the liquidity, they choose a price range (formally, represented in the form of
ticks), within which their deployed capital can be used by other users to perform their
swaps. The tighter these bounds are chosen, the greater the return to the LP can be.
However, if the price deviates outside of those bounds, the LP no longer earns any fees.
Because of that, choosing the price range is of utmost importance for a liquidity provider.

One of the metrics that an LP can use to decide on the most optimal liquidity bounds is the price volatility of a given pair. If the LP expects high volatility, they would deploy their liquidity in a wider range to prevent the price from deviating outside their bounds and losing them money. Conversely, if the LP expects low volatility, they would deploy a tighter-bound position in order to increase capital efficiency and, as a result, generate more yield.

In this project we will use the volatility prediction to adjust the width of the LP liquidity bounds.
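As a rough illustration of this idea (not the exact helper used later in this project), a predicted volatility can be turned into a symmetric tick range around the current price. The scaling factor and tick spacing below are assumptions for the 0.3% fee tier:

import math

TICK_SPACING = 60  # tick spacing of the 0.3% fee tier

def price_to_tick(price: float) -> int:
    # Uniswap V3 defines price = 1.0001 ** tick
    return math.floor(math.log(price, 1.0001))

def tick_range_from_volatility(current_price: float, predicted_vol: float, width: float = 2.0):
    # Higher predicted volatility -> wider liquidity range
    lower_price = current_price * (1 - width * predicted_vol)
    upper_price = current_price * (1 + width * predicted_vol)
    lower_tick = price_to_tick(lower_price) // TICK_SPACING * TICK_SPACING   # round down
    upper_tick = -(-price_to_tick(upper_price) // TICK_SPACING) * TICK_SPACING  # round up
    return lower_tick, upper_tick

For example, with a current price of 1.0 and a predicted daily volatility of 2%, this yields a range of roughly ±4% around the current price.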
2. Setting up Your Development Environment
Python 3.11 must be installed on your machine
giza-sdk should be installed to use giza cli and giza agents. You can install it by
running pip install giza-sdk
You must have an active Giza account. If you don't have one, you can create one here.
Depending on the framework you want to use to develop a volatility prediction model,
you might need to install some external libraries. In this example, we are using torch,
scikit-learn, and pandas. You can install them by running
pip install -U scikit-learn torch pandas

You will also need a funded EOA Ethereum address linked to an ape account. You can follow the creating an account and funding the account parts of our MNIST tutorial to complete these steps.
Once you have a funded development address, you need to get the tokens you want to provide liquidity for. In this example we are using the UNI-WETH pool with a 0.3% fee on Ethereum Sepolia. You can approve and mint WETH, and swap some of it for UNI, with the get_tokens.py script. Simply execute python get_tokens.py and follow the prompts in the console. The script will mint 0.0001 WETH and swap half of it for UNI.
Finally, we will need some environment variables. Create a .env file in the directory of
this project and populate it with these 2 variables:

DEV_PASSPHRASE="<YOUR-APE-ACCOUNT-PASSWORD>"
SEPOLIA_RPC_URL="YOUR-RPC-URL"

We recommend using private RPCs but if you don't have one and want to use a public
one, you can use https://eth-sepolia.g.alchemy.com/v2/demo
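As a minimal sketch of how a script can pick these variables up (assuming the python-dotenv package is installed; pip install python-dotenv):

import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the project directory

sepolia_rpc_url = os.environ["SEPOLIA_RPC_URL"]  # later embedded in the chain string
dev_passphrase = os.environ["DEV_PASSPHRASE"]    # unlocks the "dev" ape account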

3. Building the Volatility Prediction Model


Volatility prediction is a problem almost as old as the modern financial markets. There
have been whole books written on the topic and diving deep into it is outside the scope
of this tutorial.
In this project, we are using a simple multi-layer perceptron to estimate the volatility of a pair on the next day. After we train our model, we need to export it to the ONNX format, which will be used in the next step to transpile it into Cairo. This script shows an example of how this can be achieved. You can simply execute python model_training.py and the script will download the ETH/USD and UNI/USD prices, preprocess the data, train a simple neural network with Torch, and save the model in ONNX format.
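For reference, here is a minimal sketch of the export step. The layer sizes and file name are illustrative, not the exact architecture used in model_training.py:

import torch
import torch.nn as nn

# Illustrative stand-in for the trained multi-layer perceptron
n_features = 10
model = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

dummy_input = torch.randn(1, n_features)
torch.onnx.export(
    model,
    dummy_input,
    "volatility_model.onnx",  # path later passed to giza transpile
    input_names=["val"],      # matches the input_feed key used by the agent
    output_names=["output"],
)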

4. Deploying Inference Endpoint


First, let's log in to the Giza CLI with giza users login.

After logging in, we need to transpile our ONNX model representation into Cairo. To achieve this, simply execute:

giza transpile --model-id <YOUR-MODEL-ID> --framework CAIRO <PATH-TO-YOUR-ONNX-MODEL> --output-path <YOUR-OUTPUT-PATH>

Finally, we can deploy an endpoint that we will use to run our inferences. Execute it with:

giza endpoints deploy --model-id <YOUR-MODEL-ID> --version-id <YOUR-VERSION-ID>

5. Creating a Giza Agent


Next, we want to create a GizaAgent that executes the verifiable inference and interacts
with the blockchain. We can do this by running the following command:

giza agents create --model-id <YOUR-MODEL-ID> --version-id <YOUR-VERSION-ID> --name <AGENT-NAME> --description <AGENT-DESCRIPTION>

# or if you have the endpoint-id

giza agents create --endpoint-id <ENDPOINT-ID> --name <AGENT-NAME> --description <AGENT-DESCRIPTION>

This command will prompt you to choose the account you want to use with the agent,
once you select the account, the agent will be created and you will receive the agent id.
The output will look like this:
[giza][2024-04-10 11:50:24.005] Creating agent ✅
[giza][2024-04-10 11:50:24.006] Using endpoint id to create agent, retrieving model id and version id
[giza][2024-04-10 11:50:53.480] Select an existing account to create the agent.
[giza][2024-04-10 11:50:53.480] Available accounts are:
┏━━━━━━━━━━━━━┓
┃ Accounts ┃
┡━━━━━━━━━━━━━┩
│ my_account │
└─────────────┘
Enter the account name: my_account
{
"id": 1,
"name": <agent_name>,
"description": <agent_description>,
"parameters": {
"model_id": <model_id>,
"version_id": <version_id>,
"endpoint_id": <endpoint_id>,
"alias": "my_account"
},
"created_date": "2024-04-10T09:51:04.226448",
"last_update": "2024-04-10T09:51:04.226448"
}

This will create an AI Agent that can be used to interact with the Uniswap smart
contracts.

6. Defining the on-chain Interactions


In Uniswap V3, each liquidity position is represented as an NFT minted by the
NFTManager. When minting a new position, adding and removing liquidity, or collecting
the fees, we will interact with this contract.

In order to remove the liquidity from an existing position, we need to get the liquidity amount locked inside it. Using that value, we can remove all of it from our NFT position and collect all the tokens, along with all the fees accrued:
def get_pos_liquidity(nft_manager, nft_id):
    (
        nonce,
        operator,
        token0,
        token1,
        fee,
        tickLower,
        tickUpper,
        liquidity,
        feeGrowthInside0LastX128,
        feeGrowthInside1LastX128,
        tokensOwed0,
        tokensOwed1,
    ) = nft_manager.positions(nft_id)
    return liquidity

def close_position(user_address, nft_manager, nft_id):
    liq = get_pos_liquidity(nft_manager, nft_id)
    if liq > 0:
        # Remove all liquidity (deadline shown here as a short window from now)
        nft_manager.decreaseLiquidity((nft_id, liq, 0, 0, int(time.time()) + 60))
        # Collect all tokens and accrued fees
        nft_manager.collect((nft_id, user_address, MAX_UINT_128, MAX_UINT_128))

Before minting a new position, we need to know the current pool price. On Uniswap V3, this value is represented as a tick. If you need a refresher on Uniswap V3 math, we recommend this article. Next, using the result of our volatility model, we will calculate the lower and upper tick of the new liquidity position. Finally, we will prepare the parameters for the minting transaction. The source code for the helper functions used can be found here.

_, curr_tick, _, _, _, _, _ = contracts.pool.slot0()

tokenA_decimals = contracts.tokenA.decimals()
tokenB_decimals = contracts.tokenB.decimals()
lower_tick, upper_tick = get_tick_range(
    curr_tick, predicted_value, tokenA_decimals, tokenB_decimals, pool_fee
)
mint_params = get_mint_params(
    user_address, tokenA_amount, tokenB_amount, pool_fee, lower_tick, upper_tick
)

Finally, we can mint a new position by calling the mint function on the NFTManager smart contract:

contract_result = contracts.nft_manager.mint(mint_params)

7. Defining the Execution Flow


Now we will use the giza-agents SDK to develop our AI Agent and adjust the LP position. We need to implement the following steps:

Fetch all the required addresses
Fetch the input data to our volatility model
Create the AI Agent instance
Run verifiable inference
Close the previous LP position
Compute the data to mint a new position
Approve the NFTManager to spend the tokens
Mint a new LP position

All the code can be found in this script. An example of how to define a prediction task,
create an AI Agent, and mint an NFT representing an LP position:
from giza.agents import GizaAgent

def predict(agent: GizaAgent, X: np.ndarray):
    """
    Predict the next day volatility.

    Args:
        X (np.ndarray): Input to the model.

    Returns:
        int: Predicted value.
    """
    prediction = agent.predict(input_feed={"val": X}, verifiable=True, job_size="M")
    return prediction

def create_agent(
    model_id: int, version_id: int, chain: str, contracts: dict, account: str
):
    """
    Create a Giza agent for the volatility prediction model
    """
    agent = GizaAgent(
        contracts=contracts,
        id=model_id,
        version_id=version_id,
        chain=chain,
        account=account,
    )
    return agent

def rebalance_lp(
    tokenA_amount,
    tokenB_amount,
    pred_model_id,
    pred_version_id,
    account="dev",
    chain=f"ethereum:sepolia:{sepolia_rpc_url}",
    nft_id=None,
):
    ...
    agent = create_agent(
        model_id=pred_model_id,
        version_id=pred_version_id,
        chain=chain,
        contracts=contracts,
        account=account,
    )
    result = predict(agent, X)
    ...
    with agent.execute() as contracts:
        ...
        contract_result = contracts.nft_manager.mint(mint_params)

8. Running the AI Agent


Finally, we can execute our script with the desired parameters:

python action_agent.py --model-id <YOUR-MODEL-ID> --version-id <YOUR-VERSION-ID>

You should see the transaction receipt representing minting of a new LP position on
Uniswap V3. Congrats, you have just used an AI Agent to provide liquidity on a
decentralized exchange!

9. Conclusion
In this tutorial, we learned how to use the Giza SDK to create an AI Agent that automatically rebalances our Uniswap V3 LP position. The next steps would include iterating on the volatility prediction model and refining the rebalancing logic to improve the agent's performance.
