Giza Docs
Welcome
Giza
🌱 Where to start?
⏩ Quickstart: Learn the basics to get started using the Giza Platform, AI Agents and Datasets SDK.
🧱 Products: Consult the documentation for all Giza products.
🧑‍🎓 Tutorials: Learn by doing, with many tutorials.
Installation
How to install the Giza ecosystem.
To use the Giza ecosystem, you'll need the Giza CLI and the Giza Agents SDK.
You should install Giza tools in a virtual environment. If you’re unfamiliar with Python
virtual environments, take a look at this guide. A virtual environment makes it easier to
manage different projects and avoid compatibility issues between dependencies.
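The exact setup commands are not shown here; a minimal sketch using the standard venv module (assuming Python 3.11 is already installed on your system):

python3.11 -m venv .env
source .env/bin/activate
python --version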
Now, your terminal session will use Python 3.11 for this project.
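The install command itself is not shown here; a hedged sketch of installing the main (development) version straight from source, which is what the note below describes (the repository URLs are assumptions):

pip install git+https://github.com/gizatechxyz/giza-cli.git git+https://github.com/gizatechxyz/giza-agents.git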
This command installs the bleeding edge main version rather than the latest
stable version. The main version is useful for staying up-to-date with the
latest developments, for instance when a bug has been fixed since the last
official release but a new release hasn't been rolled out yet. However, this
means the main version may not always be stable. We strive to keep the
main version operational, and most issues are usually resolved within a few
hours or a day. If you run into a problem, please open an Issue so we can fix it
even sooner!
Once installed, you can use each of these packages independently under the Giza
namespace. Some examples might be:
from giza.datasets import DatasetsLoader
from giza.agents import GizaAgent
from giza.zkcook import serialize_model
giza --help
Install CLI
The Giza CLI facilitates user creation and login, transpiles models into ZK models, and
deploys a verifiable inference endpoint. These steps are essential for creating an Agent.
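The install command itself is not shown here; a sketch of installing the CLI from the main branch (the repository URL is an assumption), which matches the note below:

pip install git+https://github.com/gizatechxyz/giza-cli.git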
This command installs the bleeding edge main version rather than the latest
stable version. The main version is useful for staying up-to-date with the
latest developments, for instance when a bug has been fixed since the last
official release but a new release hasn't been rolled out yet. However, this
means the main version may not always be stable. We strive to keep the
main version operational, and most issues are usually resolved within a few
hours or a day. If you run into a problem, please open an Issue so we can fix it
even sooner!
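The clone commands themselves are not shown here; a minimal sketch of an editable install from source (the repository URL is an assumption based on the ~/giza-agents path used below):

git clone https://github.com/gizatechxyz/giza-agents.git ~/giza-agents
cd ~/giza-agents
pip install -e .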
You must keep the giza-agents folder if you want to keep using the library.
Now you can easily update your clone to the latest version of giza-agents
with the following command:
cd ~/giza-agents/
git pull
Your Python environment will find the main version of giza-agents on the
next run.
Clone the repository and install giza-datasets with the following commands:
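The commands themselves are not shown here; a minimal sketch of an editable install (the repository URL is an assumption based on the ~/datasets path used below):

git clone https://github.com/gizatechxyz/datasets.git ~/datasets
cd ~/datasets
pip install -e .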
These commands link the folder you cloned the repository into with your
Python library paths. Python will now look inside the folder you cloned to in
addition to the normal library paths. For example, if your Python packages
are typically installed in
~/anaconda3/envs/main/lib/python3.7/site-packages/ , Python will also
search the folder you cloned to: ~/datasets/ .
You must keep the datasets folder if you want to keep using the library.
Now you can easily update your clone to the latest version of giza-datasets
with the following command:
cd ~/datasets/
git pull
Your Python environment will find the main version of giza-datasets on the
next run.
Quickstart
In this resource, you'll learn the basics to get started using Giza products!
Before you begin, make sure you have all the necessary libraries installed.
Installation Guide
From your terminal, create a Giza user through our CLI in order to access the
Giza Platform:
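The commands themselves are not shown here; a minimal sketch, assuming the users subcommands described later in these docs:

giza users create
giza users login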
Optional: you can create an API Key for your user so you don't have to regenerate
your access token every few hours.
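A minimal sketch (the exact subcommand name is an assumption; check giza users --help):

giza users create-api-key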
This step is only necessary if you have not yet deployed a verifiable
inference endpoint.
Transpilation
Transpilation is a crucial process in the deployment of Verifiable Machine
Learning models. It involves the transformation of a model into a Cairo model.
These models can generate ZK proofs.
The transpilation process starts by reading the model from the specified
path. The model is then sent for transpilation.
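The commands themselves are not shown here; a minimal sketch of this part of the flow (the deploy flag names are assumptions; the other commands appear elsewhere in these docs):

giza transpile awesome_model.onnx --output-path cairo_model
giza endpoints deploy --model-id <model-id> --version-id <version-id>
ape accounts generate <account-alias>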
❗ It will ask you for a passphrase; make sure to save it in a safe place, as it
will be used to unlock the account when signing.
We encourage the creation of a new account for each agent, as it will allow
you to manage the agent's permissions and access control more effectively,
but importing accounts is also possible.
If you are using Sepolia testnet, you can get some testnet ETH from a faucet
like Alchemy Sepolia Faucet or LearnWeb3 Faucet. If you are on mainnet you
will need to transfer funds to it.
This command will prompt you to choose an ape account; specify the one that
you want to use and it will create the agent.
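A minimal sketch of the creation command (the --name and --description flags are assumptions; an --endpoint-id variant is described later in these docs):

giza agents create --model-id <model-id> --version-id <version-id> --name <agent-name> --description <description>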
agent = GizaAgent(
id=<model-id>,
version_id=<version-id>,
contracts={
# contracts is an <alias>: <contract-address> dict
"mnist": "0x17807a00bE76716B91d5ba1232dd1647c4414912",
"token": "0xeF7cCAE97ea69F5CdC89e496b0eDa2687C95D93B",
},
# Specify the chain that you are using
chain="ethereum:sepolia:geth",
)
This section is mainly intended for developers who are already accustomed to the
fundamentals of Python, as well as its common ML libraries and frameworks.
1. Import giza-datasets
import os
import certifi
from giza.datasets import DatasetsHub, DatasetsLoader
os.environ['SSL_CERT_FILE'] = certifi.where()
With the DatasetsHub() object, we can now query the DatasetsHub to find
the perfect dataset for our ML model. See DatasetsHub for further
instructions. Alternatively, you can check the DatasetsHub pages to explore the
available datasets from your browser.
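Before querying, instantiate the hub (and a loader for later use); both constructors appear elsewhere in these docs:

hub = DatasetsHub()
loader = DatasetsLoader()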
Let's use the list_tags() function to list all the tags and then
get_by_tag() to query all the datasets with the "Yearn-v2" tag.
print(hub.list_tags())
Yearn-v2 looks interesting; let's search all the datasets that have the 'Yearn-v2' tag.
datasets = hub.get_by_tag('Yearn-v2')
df = loader.load('yearn-individual-deposits')
df.head()
shape: (5, 7)
Keep in mind that giza-datasets uses Polars (and not Pandas) as the
underlying DataFrame library.
Perfect, the Dataset is loaded correctly and ready to go! Now we can use our
preferred ML Framework and start building.
Contribution Guidelines
We strongly encourage our users to create tutorials for their open-source ML
projects using the Giza Platform and its components. When your PR for a new
tutorial in Giza Hub gets merged, you become eligible for OnlyDust grants!
For consistency and clarity, we ask all user tutorials to follow certain criteria and
guidelines, which will be described below. Tutorials such as the MNIST Model are great
examples of content and style.
Giza Tutorials
Giza tutorials are meant to be complete, open-source projects built by the Giza Data
Science team and Giza Users, to showcase the use of the Giza Framework in an end-to-
end setting, as well as to illustrate potential ZKML use cases and inspire new ideas. Given
the relative immaturity of the ZKML ecosystem, we believe there are many ZKML use
cases yet to be discovered and we welcome everyone to be a part of the first explorers!
General
Not all good ML use cases are good ZKML use cases. Given the additional
complexity and costs, verifiability of the model inferences must be a requirement or
must provide a significant advantage for the given task. As Giza, we expect all
submitted tutorials to have clear ZKML use cases.
Tutorials are created within Python virtual environments and are reproducible, using
a requirements.txt or poetry.lock in the root dir.
(If the project uses data that is not available through Giza Datasets) : Project contains
data collection scripts to load/fetch the data inside the project. Do not push data
files in Giza Hub Repo.
Similarly, do not push any model files. Model files include .pt, .onnx, or verifiable
Cairo models.
Model Development
Given some of the current restrictions in model complexity, it is expected to have low
performance for some of the ZKML models. However, we expect all projects to have a
fitting set of performance metrics given the task, a section as part of the
documentation on how to interpret them, as well as steps that can potentially be
taken from a model architecture point of view to improve the model performance.
We expect the model development to be well documented and follow the general data
science conventions and order of operations.
Readme
The project repository must contain a readme file with the following sections:
We strongly encourage the tutorial authors to use code snippets for the Project Installation,
Overview of Model Development, Model Performance, and Giza Integration Sections.
Showing individual commands and lines of code is always more informative than
prose instructions alone.
Problem Setting - Introduce the task and the problem setting. Why is ZKML
useful/necessary? Are there any papers/resources that can be read to learn more
about the problem?
Project Installation - How can another developer reproduce the project, install
dependencies etc.
Overview of Model Development - The crucial steps of the model development
process. Not every step is necessary, but model architecture and/or model training
scheme probably are.
Model Performance - The description of the model performance metric, as well as the
measurements from the developed model from the testing. (Here is a good point to
also discuss possible improvements to the model architecture.)
Giza Integration - How can we use Giza CLI & giza-agents to make the model
verifiable? What are the individual Giza components that are used along the way?
Affiliations - If affiliated with any team/organization, make sure it is
stated in the readme for clarity.
How to contribute?
After making sure your tutorials follow the guidelines above, simply create a new branch
in the Giza-Hub GitHub repository, and push your model under the awesome-giza-agents folder.
If your tutorial is original, reproducible, fits the guidelines and is a valuable addition to
our zkML use-case collection, you will be eligible for OnlyDust rewards!
Products
All Products
Platform: Automate and streamline processes for zero-knowledge machine learning.
AI Agents: Build agents that automate smart strategies on top of decentralized protocols.
Datasets: Curated, structured, labeled blockchain data to train and build your machine learning models.
Platform
Resources
Users
Manage users.
Create
Login
Create API Key
Me
Create a user
Allows you to create a user using the CLI. The username must be unique and the email
account should not have been used previously for another user.
If there is an error, or you want more information about what is going on, there is
a --debug flag that will add more information about the error. This will print outgoing
requests to the API, debug logs, and a Python traceback of what happened.
⚠️ Note: be aware that the debug option will print everything that is sent to the API;
in this case the password will be printed as plain text in the terminal. If you are using the
debug option to file an issue, make sure to remove the credentials.
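The command itself is not shown here; a minimal sketch (the --debug flag is described above):

giza users create
giza users create --debug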
Login a user
Log into the Giza platform and retrieve a JWT for authentication. This JWT will be stored to
authenticate you later until the token expires.
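The commands themselves are not shown here; a minimal sketch (the create-api-key subcommand name is an assumption):

giza users login
giza users create-api-key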
Now you can use the API key to authenticate yourself without needing to log in again
and again.
NOTE: Using an API key is less secure than a JWT, so use it with caution.
In the Giza Platform, a model represents a container for versions of your machine learning
model. This design allows you to iterate and improve your ML model by creating new
versions for it. Each model can have multiple versions, providing a robust and flexible
way to manage the evolution of your ML models. This traceability feature ensures that
you have a clear record of the original model used for transpilation, who performed the
transpilation, and the output generated.
Create a model
List models
Retrieve model information
Transpile a model
Create a Model
Creating a new model in Giza is a straightforward process. You only need to provide a
name for the model using the --name option. You can also add a description of the
model using the --description option. This can be helpful for keeping track of
different models and their purposes.
> giza models create --name my_new_model --description "A New Model"
[giza][2023-09-13 13:24:28.223] Creating model ✅
{
"id": 1,
"name": "my_new_model",
"description": "A New Model"
}
Typically, the transpile command is used to handle model creation. During this
process, the filename is checked for an existing model. If none is found, a new model is
automatically created. However, manual creation of a model is also supported. For more
information, refer to the transpile documentation.
List Models
Giza provides a simple and efficient way to list all the models you have stored on the
server. This feature is especially useful when you have multiple models and need to
manage them effectively.
To list all your models, you can use the list command. This command retrieves and
displays a list of all models stored in the server. Each model's information is printed in a
json format for easy readability and further processing.
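The command itself is not shown here; a minimal sketch (the list subcommand also appears later in these docs):

giza models list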
> giza models get --model-id 1 # When we create model for you we output i
[giza][2023-09-13 13:17:53.594] Retrieving model information ✅
{
"id": 1,
"name": "my_new_model",
"description": "A New Model"
}
Transpile a model
Note: This is explained extensively in the transpile documentation.
When you execute the 'transpile' command, it initially checks for the presence of the
model on the Giza platform. If no model is found, it automatically generates one and
performs the transpilation. Here is an example of the command:
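A minimal sketch of the command, matching the transpile examples shown later in these docs:

giza transpile awesome_model.onnx --output-path cairo_model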
It's worth noting that if you already have created a model, you can transpile it by
specifying the model ID:
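For example (mirroring the model-id example shown later in these docs):

giza transpile --model-id 1 awesome_model.onnx --output-path cairo_model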
This method is useful when you intend to create a new version of an existing model.
For more information, refer to the transpile documentation.
Versions
Manage your model versions.
In Giza Platform, a version represents a specific iteration of your machine learning model
within a Giza Model. This design allows you to iterate and improve your ML model by
creating new versions for it. Each model can have multiple versions, providing a robust
and flexible way to manage the evolution of your ML models. This traceability feature
ensures that you have a clear record of each version of your model.
List Versions
Giza Platform provides a simple and efficient way to list all the versions of a specific
model you have. This feature is especially useful when you have multiple versions of a
model and need to manage them effectively.
To list all your versions of a model, you can use the list command. Each version's
information is printed in a json format for easy readability and further processing.
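A minimal sketch (a full example with output appears later in these docs):

giza versions list --model-id 1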
Transpiling a model version in Giza Platform is a crucial step in the model deployment
process as an endpoint. Transpilation is the process of converting your machine learning
model into a format that can be executed on Giza Platform.
When you transpile a model, you're essentially creating a new version of that model. Each
version represents a specific iteration of your machine learning model, allowing you to
track and manage the evolution of your models effectively.
> giza versions transpile awesome_model.onnx --output-path my_awesome_mode
[giza][2023-09-13 12:56:43.725] No model id provided, checking if model ex
[giza][2023-09-13 12:56:43.726] Model name is: awesome_model
[giza][2023-09-13 12:56:43.978] Model Created with id -> 1! ✅
[giza][2023-09-13 12:56:44.568] Sending model for transpilation ✅
[giza][2023-09-13 12:56:55.577] Transpilation recieved! ✅
[giza][2023-09-13 12:56:55.583] Transpilation saved at: cairo_model
Once the transpilation process is complete, a new version of the model is created in Giza
Platform. The version will be downloaded and saved at the specified output path, but you
can also download it again later with the download command.
It first checks if a model with the specified name already exists on Giza
Platform. If not, it creates a new model and then transpiles it.
The output of this process is saved in the cairo_model/ folder by
default, but you can specify a different output path using the
--output-path option.
The result of the transpilation process is saved at the provided path, in this
case, my_awesome_model/ .
A model is fully compatible when all the operators that the model uses are
supported by the Transpiler. If this happens, the model is compiled after
transpilation and we save the compiled file on behalf of the user to use later
for endpoint deployment (endpoint docs). When the model is only partially
supported, a warning like the following is shown in the output of the transpile command:
[WARN][2024-02-07 16:42:31.209] 🔎 Transpilation is partially supported. Some operators are not yet supported in the Transpiler/Orion
[WARN][2024-02-07 16:42:31.211] Please check the compatibility list in Orion: https://cli.gizatech.xyz/frameworks/cairo/transpile#supported-operators
1. First, you create a model manually using the giza models create
command.
2. After the model is created, you can transpile it using the
giza transpile --model-id ...
If you have a previously created model, you can transpile it by indicating the
model-id in the giza transpile --model-id ... or
giza versions transpile --model-id command.
This method is useful when you want to create a new version of an existing
model.
The output of the transpilation process is saved in the same location as
the original model.
# Using the previous model (id: 2) we can transpile a new model,
giza transpile --model-id 29 awesome_model.onnx --output-path ne
[giza][2023-09-13 14:11:30.015] Model found with id -> 2! ✅
[giza][2024-02-07 14:11:30.225] Version Created with id -> 2! ✅
[giza][2023-09-13 14:11:30.541] Sending model for transpilation
[giza][2023-09-13 14:11:41.601] Transpilation recieved! ✅
[giza][2023-09-13 14:11:41.609] Transpilation saved at: new_awes
Transpilation Results
When a version is transpiled the version can be in the following statuses:
We try to support all the available operators on Orion, but there might be a little lag between
Orion's implementation and transpilation availability.
This version has some operators that are not available in the transpilation, but they might
be supported in Orion. When a model is not fully compatible, in the
inference/lib.cairo a comment will be shown:
Let's say that LogSoftMax is the unsupported operator. If we check the Orion
Documentation, we can see that it is supported there. Now we could add the necessary code to
add our operator (including imports):
LogSoftMax only serves as an example here; it does not mean that it is currently unsupported.
After the manual implementation, we can trigger the update with the update command:
We want to update the first version of the first model with our new code; the code is at
--model-path cairo_model .
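A minimal sketch of that command (assuming the update subcommand lives under giza versions; the --model-path flag is referenced above):

giza versions update --model-id 1 --version-id 1 --model-path cairo_model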
The version has been updated successfully! Now we have a fully compatible model that
generated a Sierra file and can be easily deployed! The version will now be frozen, so it won't
allow any more updates.
To check the current models and versions that have been created, you can use the
following steps:
1. Use the giza models list command to list all the models that have been created.
2. For each model, you can use the giza versions list --model-id ... command
to list all the versions of that model.
Remember, each version represents a specific transpilation of the model. So, if you have
made changes to your machine learning model and transpiled it again, it will create a
new version.
This system of models and versions allows you to manage and keep track of the
evolution of your machine learning models over time.
For example, let's say you have created a model called awesome_model and transpiled it
twice. This will create two versions of the model, version 1 and version 2. You can check
the status of these versions using the giza versions list --model-id ...
command.
giza versions list --model-id 29
[giza][2023-09-13 14:17:08.006] Listing versions for the model ✅
[
{
"version": 1,
"size": 52735,
"status": "COMPLETED",
"message": "Transpilation Successful!",
"description": "Initial version",
"created_date": "2023-09-13T12:08:38.177605",
"last_update": "2023-09-13T12:08:43.986137"
},
{
"version": 2,
"size": 52735,
"status": "COMPLETED",
"message": "Transpilation Successful!",
"description": "Initial version",
"created_date": "2023-09-13T12:11:30.165440",
"last_update": "2023-09-13T12:11:31.625834"
}
]
Endpoints
Endpoints in our platform provide a mechanism for creating services that accept
predictions via a designated endpoint. These services, based on existing platform
versions, leverage Cairo under the hood to ensure provable inferences. Using the CLI,
users can effortlessly deploy and retrieve information about these machine learning
services.
To create a new service, users can employ the deploy command. This command
facilitates the deployment of a verifiable machine learning service ready to accept
predictions at the /cairo_run endpoint, providing a straightforward method for
deploying and utilizing machine learning capabilities.
If a model is fully compatible, the Sierra file is not needed and the model can be deployed
without specifying it in the command:
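The deploy command itself is not shown here; a minimal sketch (the flag names are assumptions based on how models and versions are identified elsewhere in these docs):

giza endpoints deploy --model-id 1 --version-id 1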
Example request
Now our service is ready to accept predictions at the provided endpoint URL. To test this,
we can use the curl command to send a POST request to the endpoint with a sample
input.
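A hypothetical request sketch (the endpoint URL is a placeholder and the JSON body shape is an assumption; check your endpoint's response from the deploy step for the exact payload format):

curl -X POST https://<your-endpoint-url>/cairo_run \
  -H "Content-Type: application/json" \
  -d '{"args": "<serialized-input>"}'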
Listing endpoints
The list command is designed to retrieve information about all existing endpoints. It
provides an overview of the deployed machine learning services, allowing users to
monitor and manage multiple endpoints efficiently.
Executing this command will display a list of all current endpoints, including relevant
details such as service names, version numbers, and endpoint status.
To list only active endpoints you can use the flag --only-active/-a so only active ones
are shown.
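A minimal sketch of both forms:

giza endpoints list
giza endpoints list --only-active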
Retrieving an endpoint
For retrieving detailed information about a specific endpoint, users can utilize the get
command. This command allows users to query and view specific details of a single
endpoint, providing insights into the configuration, status, and other pertinent
information.
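A minimal sketch (the --endpoint-id flag appears in the delete and verify examples below):

giza endpoints get --endpoint-id 1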
Delete an endpoint
For deleting an endpoint, users can use the delete command. This command facilitates
the removal of a machine learning service, allowing users to manage and maintain their
deployed services efficiently.
> giza endpoints delete --endpoint-id 1
>>>>
[giza][2024-03-06 18:10:22.548] Deleting endpoint 1 ✅
[giza][2024-03-06 18:10:22.830] Endpoint 1 deleted ✅
The endpoints are not fully deleted, so you can still access the underlying proofs generated
by them.
Verify a proof
After successfully creating a proof for your Orion Cairo model, the next step is to verify
its validity. Giza offers a verification method using the verify command alongside the
endpoint-id and proof-id .
> giza endpoints verify --endpoint-id 1 --proof-id "b14bfbcf250b404192765d
[giza][2024-02-20 15:40:48.560] Verifying proof...
[giza][2024-02-20 15:40:49.288] Verification result: True
[giza][2024-02-20 15:40:49.288] Verification time: 2.363822541
Endpoint Sizes
Size | vCPU | Memory (GB)
S | 2 | 2
M | 4 | 4
L | 4 | 8
XL | 8 | 16
Agents
Agents are entities designed to assist users in interacting with Smart Contracts by
managing the proof verification of verifiable ML models and executing these contracts
using Ape's framework.
Agents serve as intermediaries between users and Smart Contracts, facilitating seamless
interaction with verifiable ML models and executing associated contracts. They handle
the verification of proofs, ensuring the integrity and authenticity of data used in contract
execution.
Creating an agent
Listing agents
Retrieving an agent
Updating an agent
Deleting an agent
More information
Creating an agent
To create an agent, first you need to have an endpoint already deployed and an ape
account created locally. If you have not yet deployed an endpoint, please refer to the
endpoints documentation. To create the ape account, you can use the
ape accounts generate command:
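A minimal sketch:

ape accounts generate <account-alias>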
To create an agent, users can employ the create command. This command facilitates
the creation of an agent, allowing users to interact with deployed endpoints and execute
associated contracts.
During the creation you will be asked to select an account to create the agent. The
account is used to sign the transactions in the smart contracts.
An Agent can also be created using the --endpoint-id flag, which allows users to
specify the endpoint ID directly.
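A minimal sketch of both forms (the --name and --description flags are assumptions based on the agent fields shown below; --endpoint-id is mentioned above):

giza agents create --model-id 1 --version-id 1 --name agent_one --description "My first agent"
giza agents create --endpoint-id 1 --name agent_one --description "My first agent"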
Retrieving an agent
For retrieving detailed information about a specific agent, users can utilize the get
command. This command allows users to view the details of a specific agent:
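A minimal sketch (the --agent-id flag appears in the update example below):

giza agents get --agent-id 1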
{
"id": 1,
"name": "Agent one",
"description": "Agent to handle liquidity pools",
"parameters": {
"model_id": 1,
"version_id": 1,
"endpoint_id": 1,
"account": "awesome_account",
},
"created_date": "2024-04-09T15:07:14.282177",
"last_update": "2024-04-10T10:06:36.928941"
}
Updating an agent
To update an agent, users can use the update command. This command facilitates the
modification of an agent, allowing users to update the agent's name, description, and
parameters.
> giza agents update --agent-id 1 --name "Agent one updated" --description
{
"id": 1,
"name": "Agent one updated",
"description": "Agent to handle liquidity pools updated",
"parameters": {
"model_id": 1,
"version_id": 1,
"endpoint_id": 1,
"chain": "ethereum:mainnet:geth",
"account": "awesome_account",
},
"created_date": "2024-04-10T09:51:04.226448",
"last_update": "2024-04-10T10:37:28.285500"
}
The parameters can be updated using the --parameters flag, which allows users to
specify the parameters to be updated.
The --parameters flag can be used multiple times to update multiple parameters and
expects a key-value pair separated by an equal sign, parameter_key=parameter_value .
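For example (chain is one of the parameters shown in the output above):

giza agents update --agent-id 1 --parameters chain=ethereum:sepolia:geth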
Delete an agent
For deleting an Agent, users can use the delete command. This command will erase
any related data to the agent.
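A minimal sketch:

giza agents delete --agent-id 1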
Running Inference
To run inference, use the /cairo_run endpoint of your deployed model's
URL. For example:
https://deployment-gizabrain-38-1-53427f44-dagsgas-ew.a.run.app/
This action will execute the inference, generate Trace and Memory files on
the platform, and initiate a proving job. The inference process will return the
output result along with a request ID.
Alternatively, you can prove a model directly using the CLI without deploying
the model for inference. This method requires providing Trace and Memory
files, which can only be obtained by running CairoVM in proof mode.
This option is not recommended because of the need to deal with CairoVM. If
you opt for this method, ensure you use the following commit of CairoVM:
1a78237 .
Job Size
When generating a proof we can choose the size of the underlying job:
S 4 16
M 8 32
L 16 64
XL 30 120
Verify
After successfully creating a proof for your Orion Cairo model, the next step is to verify
its validity.
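A minimal sketch, mirroring the verify example shown earlier:

giza endpoints verify --endpoint-id 1 --proof-id <proof-id>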
Upon successful submission, you will see a confirmation message indicating that the
verification process has begun. Once the verification process is complete, a success
message will confirm the validity of the proof:
Operator Implemented
Abs ✅
Acos ✅
Acosh ✅
Add ✅
And ✅
Asin ✅
Asinh ✅
Atan ✅
ArgMax ✅
ArgMin ✅
Cast ✅
Concat ✅
Constant ✅
Div ✅
Equal ✅
GatherElements ✅
Gather ✅
Gemm ✅
LinearClassifier ✅
LinearRegressor ✅
Less ✅
LessOrEqual ✅
MatMul ✅
Mul ✅
ReduceSum ✅
Relu ✅
Reshape ✅
Sigmoid ✅
Slice ✅
Softmax ✅
Squeeze ✅
Sub ✅
Unsqueeze
Known Limitations
Transpilation is failing due to memory
This can happen for two reasons:
The provided model is so big that the transpilation runs out of memory. When the
model is transpiled, due to a Cairo limitation we need to embed the data as a Cairo
file. These files are much bigger than a binary format (like ONNX), which means that we won't
be able to run them, as they will consume all our memory when running the Cairo
code.
When we have a fully compatible model, we compile it on the user's behalf to generate
the Sierra file in order to deploy it later. This compilation uses a lot of memory and can
lead to an Out of Memory (OOM) error, which also means that we won't be able to run it.
Transpilation is failing
When a transpilation fails, the logs of the transpilation are returned to give more
information about what is happening. If there is an unhandled error, please reach out to a
developer and provide the logs.
S 4 16
M 8 32
L 16 64
XL 30 120
Size | Quota
S | 12
M | 8
L | 6
XL | 4
When using an endpoint, remember that a dry_run= argument exists in the predict
function so proof creation is not triggered every single time, which is useful for
development.
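A minimal sketch, matching the predict signature shown in the Agents section:

result = agent.predict(input_feed=<input-data>, verifiable=True, dry_run=True)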
Proving time is X but Job took much more time
When a proving job is executed we need to gather computing resources to launch the job.
For bigger sizes gathering these resources takes more time than smaller sizes, so the
time to just spin up the job can take up to 5 minutes for sizes L and XL where sizes
S and M should just take a couple of seconds to start.
When using the predict method of a GizaModel, if this error is raised it can be for two
reasons:
Out of Memory: when running the Cairo code the service runs out of memory or the
service is killed, thus being "Unavailable". Try to delete the endpoint and create one
with a higher size, check Endpoints. If it persists, the model is too big to be run.
The shape of the input is not the expected shape. This usually happens when we
pass a numpy array with a shape of (x, ) ; make sure the array has both dimensions,
e.g. (x, 1) , and not just one, so it can be serialized by the service.
There is now a command, giza endpoints logs --endpoint-id X , to retrieve the logs
and help identify which of the two happened. For the first, a killed log message
should be shown; for the second, a deserialization ERROR.
Operating System
Python version
Giza CLI/Agents-sdk version
Giza command used
Python traceback
request_id if returned
Context
AI Agents
Easy to use Verifiable AI and smart contracts interoperability.
Giza Agents is a framework for trust-minimized integration of machine learning into on-
chain strategy and action, featuring mechanisms for agentic memory and reflection that
improve performance over their lifecycle.
The extensible nature of Giza Agents allows developers to enshrine custom strategies
using ML and other algorithms, develop novel agent functionalities and manage
continuous iteration processes.
🌱 Where to start?
⏩ Quickstart: A quick resource to give you an initial idea of how AI Agents work.
💡 Concepts: Everything you need to know about Agents.
How-To Guides: See how to achieve a specific goal using Agents.
Tutorials: A great place to start if you're a beginner. This section will help you gain the basic skills you need to start using Agents.
Concepts
Thesis
Intelligence is the arbiter of power, culture and the means of life on earth.
Despite its pervasive influence, a precise definition of intelligence remains elusive. Most
scholars, however, emphasize the incorporation of learning and reasoning into action.
This focus on the integration of learning and action has largely defined the field of
Artificial Intelligence, with some early researchers explicitly framing the task of AI as the
creation of intelligent agents capable of “perceiving [their] environment through sensors
and acting upon that environment through effectors." [2]
Yet the current environment in which these AI Agents operate is fragmented and almost
exclusively defined by private interest. These dynamics impede the potential for societal
value through collective experimentation and innovation. Such an approach requires an
open, persistent environment with self-enforcing standards and a shared medium for
communicating and transacting. Without such an environment, Agents will remain gated
and constrained private instruments, trapped in designated niches and narrow interest
groups.
Thankfully, Web3 has been building this vision of an immutable and shared digital
environment for nearly a decade. However, the value it has created is largely limited due
to several pressures:
To transcend these pressures and the adoption bottleneck, Web3 needs a new interface
paradigm: one which prioritizes user preferences and abstracts the complexities involved
in interacting with intricate technical and financial logic.
“Wallet-enabled agents can use any smart contract service or platform, from
infrastructure services to DeFi protocols to social networks, which opens a whole
universe of new capabilities and business models. An agent could pay for its own
resources as needed, whether it’s computation or information. It could trade tokens on
decentralized exchanges to access different services or leverage DeFi protocols to
optimize its financial operations. It could vote in DAOs, or charge tokens for its
functionality and trade information for money with other specialized agents. The result is
a vast, complex economy of specialized AI agents talking to each other over
decentralized messaging protocols and trading information onchain while covering the
necessary costs. It’s impossible to do this in the traditional financial system.” — Joel
Monegro, AI Belongs Onchain
By adopting this design pattern for bridging ML to Web3, Giza is enabling scalable
integration of verified inferencing to on-chain applications. ML models are converted into
ZK circuits, enabling their predictions to be integrated with on-chain applications
conditional on proof verification. This allows for performant computation of ML models
off-chain and trust-minimized execution of on-chain applications.
Verifier: the gating function of the Agent, validating trust guarantees for Agent inputs
Intent Module: handles arbitrary strategies informed by ML predictions
Wallet: Agent-controlled wallet for on-chain transactions
Memory: Agent-specific dataset parsed through observing Agent actions and their
impact on the state
These modules are accompanied by an extensible app framework that allows developers
to serve arbitrary Agent functionality to other users of Giza Agents.
Agent
Agents are entities designed to assist users in interacting with Smart Contracts by
managing the proof verification of verifiable ML models and executing these contracts
using Ape's framework.
Agents are intermediaries between users and Smart Contracts, facilitating seamless
interaction with verifiable ML models and executing associated contracts. They handle
the verification of proofs, ensuring the integrity and authenticity of data used in contract
execution.
With an Agent, we can easily use a prediction of a verifiable model and execute a smart
contract using our custom logic.
For more information on CLI capabilities with agents check the CLI documentation for
agents.
Account
An account is a wallet for the chain that we want to interact with. For now, an Agent can
be used in any number of contracts within the same account.
This account is an Ape account so anything that you could do like importing or creating
an account with Ape's framework is available.
The account is needed to sign the transactions of the contracts without human
intervention. As this account is created as an encrypted keyfile, you must have created it
with a passphrase; this passphrase is used to unlock the account.
The passphrase should be available in the environment of the execution so the agent can
retrieve it and unlock the account. It should be exported as a <ACCOUNT>_PASSPHRASE
variable. Keeping it as an environment variable (rather than in the code) is encouraged so it
is not accidentally committed and published in a public repository.
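A minimal sketch, assuming an account alias of my_agent_account (the exact casing of the variable prefix follows the <ACCOUNT>_PASSPHRASE convention above):

export MY_AGENT_ACCOUNT_PASSPHRASE=<your-passphrase>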
Contracts
A contract refers to a smart contract deployed on the chain of choice. This contract
access and signing will be handled by the Agent.
The Agent auto-signs all the transactions required by the specified contracts.
How-To-Guides
You'll find various guides to using and creating AI Agents.
Agent Usage
Create an Account (Wallet)
Before we can create an AI Agent, we need to create an account using Ape's framework.
We can do this by running the following command:
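A minimal sketch:

ape accounts generate <account-alias>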
This will create a new account under $HOME/.ape/accounts using the keyfile structure
from the eth-keyfile library. For more information on account management, you can
refer to Ape's framework documentation.
We encourage the creation of a new account for each agent, as it will allow you to
manage the agent's permissions and access control more effectively, but importing
accounts is also possible.
For development we encourage the usage of testnets and thus the funding via faucets.
contracts = {
"alias_one": "0x1234567",
"alias_two": "0x1234567",
}
agent = GizaAgent(
...,
contracts=contracts,
)
This way, when we want to use a contract within an agent, we can easily access it using
Python's dot notation, as in the following example.
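A hypothetical sketch (the "as contracts" binding and the contract function name are assumptions; the functions available depend on the deployed contract's ABI):

with agent.execute() as contracts:
    # call a function on the contract registered under the alias "alias_one"
    contracts.alias_one.some_function(<args>)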
It is mandatory to use the contracts inside the with statement of the execute()
command so contracts are available and loaded.
Also, you are responsible for handling what function to use and the parameters that it
needs, as we only provide the abstraction of the execution.
Agent Usage
An Agent is an extension of a GizaModel; it offers the same capabilities with extra
functionalities like account and contracts handling.
id : model identifier.
version_id : version identifier.
chain : chain where the smart contracts are deployed; to check what chains are
available you can run ape networks list .
contracts : this is the definition of the contracts as a dictionary, more here.
account : this is an optional field indicating the account to use; if it is not provided,
we will use the account specified at creation, otherwise we will use the specified
one if it exists locally.
agent = GizaAgent(
contracts=<contracts-dict>,
id=<model-id>,
version_id=<version-id>,
chain=<chain>,
account=<account>,
)
There is another way to instantiate an agent using the agent-id received on the
creation request with the CLI:
agent = GizaAgent.from_id(
id=<agent-id>,
contracts=<contracts-dict>,
chain=<chain>,
account=<account>,
)
Generate a verifiable prediction
This Agent brings an improved predict function which encapsulates the result in an
AgentResult class. This is the class responsible for handling and checking the status of
the proof related to the prediction; when the value is accessed, the execution is blocked
until the proof is generated and verified. It looks the same as GizaModel.predict :
result = agent.predict(
input_feed=<input-data>,
verifiable=True,
)
The verifiable argument should be True to force the proof creation, it is kept here
for compatibility with GizaModel .
result = agent.predict(
input_feed=<input-data>,
verifiable=True,
)
# This will access the value, if not verified it waits until verification
result.value
To skip proof generation during development, you can pass dry_run=True :
result = agent.predict(
input_feed=<input-data>,
verifiable=True,
dry_run=True,
)
Execute Contracts
The Agent handles the execution of the contracts that have been specified
in the contracts attribute. Under the hood, the agent gets the information about the
contract and allows for the execution of the contract in an easy way.
These contracts must be executed inside the execute() context, which handles the
contract's instantiation as well as the signing of the contracts:
from giza.agents import GizaAgent
contracts_dict = {
"lp": ...,
}
agent = GizaAgent.from_id(
id=<agent-id>,
contracts=<contracts-dict>,
chain=<chain>,
account=<account>,
)
result = agent.predict(
input_feed=<input-data>,
verifiable=True,
)
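A hypothetical continuation of the example above, executing a contract once the prediction is available (the contract function name is an assumption; it depends on the deployed contract's ABI):

with agent.execute() as contracts:
    contracts.lp.update_position(result.value)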
agent = GizaAgent(
id=MODEL_ID,
version_id=VERSION_ID,
chain=f"arbitrum:mainnet:{PRIVATE_RPC}",
account=ACCOUNT,
contracts={
...
}
)
Effortlessly access a rich array of blockchain datasets with a single line of code, and
prepare your dataset for use in sophisticated ML models. Built on a foundation that
supports efficient handling of large-scale data, our SDK ensures optimal performance
with minimal memory overhead, enabling seamless processing of extensive blockchain
datasets. Additionally, our SDK is deeply integrated with the Giza Datasets Hub, offering
a straightforward platform for both accessing a diverse range of datasets and enriching the
ML and blockchain community's resources.
Join us in advancing the frontier of blockchain-based ML solutions with the Giza Datasets
SDK, where innovation meets practicality.
🌱 Where to start?
⏩ Quickstart: If you are already familiar with similar libraries, here is a fast guide to get you started.
📚 How-To Guides: Dive into the fundamentals and acquaint yourself with the process of loading, accessing, and processing datasets. This is your starting point if you are exploring for the first time.
🏛️ Hub: Your one-stop destination for a wide range of blockchain datasets. Dive into this rich repository to find, share, and contribute datasets, enhancing your ML projects with the best in blockchain data.
How-To-Guides
Welcome to the Datasets SDK how-to-guides! These beginner-friendly guides will take
you through the basics of utilizing the Datasets SDK. You'll learn to load and prepare
blockchain datasets for training with your preferred machine learning framework. These
tutorials cover loading various dataset configurations and splits, exploring your dataset's
contents, preprocessing, and sharing datasets with the community.
Basic knowledge of Python and familiarity with a machine learning framework like
PyTorch or TensorFlow is assumed. If you're already comfortable with these, you might
want to jump into the quickstart for a glimpse of what Giza Datasets SDK offers.
Remember, these guides focus on fundamental skills for using the Giza Datasets SDK .
There's a wealth of other functionalities and applications to explore beyond these guides.
Dataset Object
Before using the DatasetsHub , it's useful to first understand Datasets themselves.
Datasets in giza.datasets are represented by the Dataset class, which includes details about a
dataset such as the dataset's name, description, link to its documentation, tags, etc. You
can query information about a given dataset with the DatasetsHub .
DatasetHub
The DatasetsHub class provides methods to manage and access datasets within the
Giza Datasets library. Before we delve deeper into the various methods, let's import the
DatasetsHub and instantiate a DatasetsHub object.
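A minimal sketch of the instantiation step:

from giza.datasets import DatasetsHub

hub = DatasetsHub()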
Use the show() method to print a table of all datasets in the hub:
hub.show()
Use the list() method to get a list of all datasets in the hub:
datasets = hub.list()
print(datasets)
Use the get() method to get a Dataset object with a given name:
dataset = hub.get('tvl-fee-per-protocol')
Use the describe() method to print a table of details for a given dataset:
hub.describe('tvl-fee-per-protocol')
Use the list_tags() method to print a list of all tags in the hub.
hub.list_tags()
Use the get_by_tag() method to get a list of Dataset objects with the given tag.
hub.get_by_tag('Liquidity')
DatasetsLoader
Locating reliable, easily reproducible datasets can often be a challenge. A key aim of the
Giza Datasets SDK is to simplify the process of accessing datasets of various formats and
types. The most straightforward way to start is to explore the Dataset Library or use the
DatasetsHub.
Assuming we already know the name of the dataset we want to load, we can
now use the DatasetsLoader to load it.
By default, DatasetsLoader has the use_cache option enabled to improve the loading
performance of our datasets. If you want to disable it, add the following parameter when
initializing your class:
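A minimal sketch, using the use_cache option named above:

loader = DatasetsLoader(use_cache=False)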
If you want to learn more about cache management, visit the Cache management
section.
import os, certifi
os.environ['SSL_CERT_FILE'] = certifi.where()
Once we have our DatasetsLoader class created and our certificates correct, we are
ready to load one of our datasets.
df = loader.load('yearn-individual-deposits')
df.head()
shape: (5, 7)
Columns shown (truncated): evt_block_time (datetime[ns]), evt_block_number (i64), vaults (str), token_contract_address (str), token_symbol (str), …
Keep in mind that giza-datasets uses Polars (and not Pandas) as the underlying
DataFrame library.
In addition, if we have the option use_cache = True (the default), the load method
allows us to load our data in eager mode. With this mode, we obtain several
advantages in both memory and time:
For more detailed information on the advantages and use of this mode, visit our Eager
mode section.
How to use it
loader = DatasetsLoader()
df = loader.load("tokens-daily-prices-mcap-volume", eager = True)
After using the collect() method, the result is loaded into memory. Before executing
the collect method, you can add as many operations as you want. Here is the result of the
above code snippet:
Throughout this tutorial, we'll explore the key functionalities Giza offers for cache
management.
How it works
The default cache directory is ~/.cache/giza_datasets .
When you create your DatasetsLoader object, you have the option to modify the
path of your cache:
or disable it:
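A hypothetical sketch of both options (the cache-directory parameter name is an assumption; use_cache is named earlier in these docs):

loader = DatasetsLoader(cache_dir="/path/to/my/cache")  # hypothetical parameter name
loader = DatasetsLoader(use_cache=False)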
and you are done! With this simple configuration, Giza takes care of downloading,
saving and loading the necessary data efficiently.
Finally, if you want to clear the cache, you can run the following command:
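The command itself is not shown here; one manual way, assuming the default cache directory above:

rm -rf ~/.cache/giza_datasets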
Description
This dataset provides daily information on various DeFi protocols, encompassing data
from 35 protocols across 5 categories. The DataFrame includes fields such as chain ,
date , totalLiquidityUSD , fees , category , and project . The primary key for this
dataset is a combination of chain , date , and project . The categories covered in this
dataset are as follows:
Collection method
The information was obtained from the Defillama API. Subsequently, a manual preprocessing
step was performed to filter out the protocols based on their TVL and the availability of
sufficient historical data, not only for TVL, but also for other fields that can be found
in other datasets provided by Giza (TVL for each token by protocol, Top pools APY per
protocol and Tokens OHLC price).
Schema
chain : The blockchain network where the protocol is deployed. In some cases, this
feature not only specifies the blockchain network but also includes certain suffixes
like "staking" or "borrowed" for some protocols. These suffixes provide deeper
insights into the specific nature of the protocol's operations on that blockchain.
date : The date of the data snapshot, recorded daily.
totalLiquidityUSD : Total value locked in USD.
fees : Fees generated by the protocol.
category : The category of the protocol (e.g., Liquid Staking, Dexes).
project : The specific DeFi project or protocol name.
The primary key consists of a combination of chain , date , and project , ensuring
each row provides a unique snapshot of a project's daily performance.
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('tvl-fee-per-protocol')
df.head()
chain | date | totalLiquidityUSD | fees | category
"ethereum" | 2020-12-20 | 2.6976e6 | 0 | "Liquid Staking"
"ethereum" | 2020-12-21 | 1.2120e7 | 0 | "Liquid Staking"
"ethereum" | 2020-12-21 | 1.1057e8 | 0 | "Liquid Staking"
"ethereum" | 2020-12-21 | 1.2109e8 | 0 | "Liquid Staking"
"ethereum" | 2020-12-21 | 2.2668e8 | 0 | "Liquid Staking"
Tokens OHLC price
Description
This dataset contains historical price data, sampled every 4 days, for various cryptocurrencies,
sourced from the CoinGecko API. It includes data fields such as Open , High , Low ,
Close , and token . The dataset is structured with date and token as its primary key,
enabling chronological analysis of cryptocurrency price movements. This dataset can be
merged with the Top pools APY per protocol dataset for combined financial analysis.
Collection method
Data is retrieved every 4 days from the CoinGecko API, which provides historical and
current price data for a range of cryptocurrencies. The API is used to gather opening,
highest, lowest, and closing prices.
Schema
date : The date for the price data.
Open : Opening price of the cryptocurrency on the given date.
High : Highest price of the cryptocurrency on the given date.
Low : Lowest price of the cryptocurrency on the given date.
Close : Closing price of the cryptocurrency on the given date.
token : Identifier of the cryptocurrency.
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('tokens-ohcl')
df.head()
Description
This dataset contains daily historical price, market cap, and 24h volume data for various
cryptocurrencies, sourced from the CoinGecko API. It includes data fields such as price
, market_cap , volumes_last_24h and token for each day. The dataset is structured
with date and token as its primary key, enabling chronological analysis of
cryptocurrency price movements. This dataset can be merged with the Top pools APY
per protocol dataset for combined financial analysis.
Collection method
Data is retrieved daily from the CoinGecko API, which provides historical and current
price data for a range of cryptocurrencies.
Schema
date : The date for the price data.
price : refers to the value of a cryptocurrency in USD at a given time, determined by
supply and demand on exchange markets.
market_cap : is the total market value of a cryptocurrency, calculated by multiplying
the current price of the cryptocurrency by its total circulating amount.
volumes_last_24h : represents the total amount of a cryptocurrency bought and
sold across all exchange platforms in the last 24 hours, reflecting the
cryptocurrency's liquidity and trading activity.
token : Identifier of the cryptocurrency.
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('tokens-daily-prices-mcap-volume')
df.tail()
date | price | market_cap | volumes_last_24h | token
Description
This dataset, sourced daily from Defillama, showcases detailed information about the top
20 pools by TVL (Total Value Locked) in the same set of protocols as covered in the "TVL
& fees protocol" dataset. It includes comprehensive data on date , tvlUsd , apy ,
apyBase , project , underlying_token , and chain . The primary key of this dataset
is composed of date , project , and chain , making it compatible for integration with
the TVL & fees protocol dataset for combined analyses.
Collection method
Data is gathered daily from Defillama, focusing on the highest TVL pools across various
DeFi projects. The collection involves identifying the top pools based on their TVL and
then compiling detailed information about their yield (APY), underlying tokens, and other
pertinent details.
Schema
date : The date of the data snapshot.
tvlUsd : Total value locked in USD.
apy : Annual Percentage Yield offered by the pool.
project : The name of the DeFi project or protocol.
underlying_token : The underlying tokens used in each pool.
chain : The blockchain network on which the pool operates.
The dataset uses a combination of date , project , and chain as its primary key.
Potential Use Cases
Yield Analysis: Understanding the APY trends and performance of top pools in
various DeFi protocols.
Market Research: Offering insights into the DeFi market's liquidity distribution and
yield opportunities for market researchers and analysts.
Investment Strategy Development: Assisting investors in identifying pools with
favorable yields and substantial liquidity for informed investment decision-making.
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('top-pools-apy-per-protocol')
df.tail()
date | tvlUsd | apy | project | underlying_token
2024-01-18 | 1520883 | 5.4928 | "yearn-finance" | "WETH"
2024-01-19 | 1534167 | 5.42961 | "yearn-finance" | "WETH"
2024-01-20 | 1524093 | 5.49112 | "yearn-finance" | "WETH"
2024-01-21 | 1491960 | 5.09787 | "yearn-finance" | "WETH"
2024-01-22 | 1440121 | 5.04939 | "yearn-finance" | "WETH"
TVL for each token by protocol
Description
This dataset provides a daily historical record of TVL (Total Value Locked) for each token
within various DeFi protocols, aligned with the protocols mentioned in the previous
datasets (TVL & fees per protocol, Top pools APY per protocol and Tokens OHCL price). It
is structured in partitions, with each partition representing a different protocol. The
columns within each partition are named after the tokens supported by that protocol,
and a token is included only if it has historical data dating back to at least August 1,
2022. The dataset includes a date column for the daily data entries and focuses on the
TVL of each token within a protocol, as opposed to the aggregate TVL of the entire
protocol.
Collection method
The data for this dataset is collected from Defillama, a reputable source for DeFi data.
The collection process involves:
Identifying protocols that have significant TVL and historical data presence.
Fetching daily TVL data for each token within these protocols.
Ensuring that tokens with data dating back to at least August 1, 2022, are included.
Organizing the data into partitions by protocol, with columns for each supported
token.
Regularly updating the dataset to reflect the latest TVL figures.
This methodical approach ensures that the dataset is comprehensive and provides a
granular view of the TVL across different tokens in the DeFi space.
Schema
partitions : Organized by protocol.
one_feature_per_supported_tokens : A different column for each of the tokens
supported by the protocol. The column name will be the name of the token. Each row
will represent the TVL of that protocol in that token.
date : Column indicating the date of each TVL entry.
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('tvl-per-project-tokens/project=lido')
df.tail()
Description
This dataset provides the aggregated daily borrows and deposits made to the Aave v2
Lending Pools. Only the pools on Ethereum L1 are taken into account, and the
contract_address feature can be used as a unique identifier for the individual pools. The
dataset contains all the pool data from 25.01.2023 to 25.01.2024, and individual rows
are omitted if there were no borrows or deposits made in a given day.
Schema
day - date
symbol - token symbols of the lending pool
contract_address - contract address of the lending pool
deposits_volume - aggregated volume of all deposits made in that day, converted to
USD
borrows_volume - aggregated volume of all borrows made in that day, converted to
USD
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-daily-deposits-borrowsv2')
Daily Deposits & Borrows v3
Description
This dataset provides the aggregated daily borrows and deposits made to the Aave v3
Lending Pools. Only the pools on Ethereum L1 are taken into account, and the
contract_address feature can be used as a unique identifier for the individual pools. The
dataset contains all the pool data from 25.01.2023 to 25.01.2024, and individual rows
are omitted if there were no borrows or deposits made in a given day.
Schema
day - date
symbol - token symbols of the lending pool
contract_address - contract address of the lending pool
deposits_volume - aggregated volume of all deposits made in that day, converted to
USD
borrows_volume - aggregated volume of all borrows made in that day, converted to
USD
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-daily-deposits-borrowsv3')
Daily Exchange Rates & Indexes v3
Description
This dataset provides the average borrowing rates (variable & stable), supply rate, and
liquidity indexes of Aave v2 Lending Pools. Only the pools in Ethereum L1 are considered,
and the contract_address feature can be used as a unique identifier for the individual
pools. The dataset contains all the pool data from 25.01.2023 to 25.01.2024, and
individual rows are omitted if there were no borrows executed on the pool.
Schema
day - date
symbol - token symbols of the lending pool
contract_address - contract address of the lending pool
avg_stableBorrowRate - daily average of the stable borrow rate for a given token
avg_variableBorrowRate - daily average of the variable borrow rate for a given token
avg_supplyRate - daily average supply rate for the given pool
avg_liquidityIndex - interest cumulated by the reserve during the time interval since
the last updated timestamp
avg_variableBorrowIndex - variable borrow index of the aave pool
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-daily-rates-indexes')
Liquidations v2
Description
This dataset contains all the individual liquidations of borrow positions in the Aave v2
Protocol. Only the liquidations in Ethereum L1 are shown and the dataset contains all the
liquidation data from inception to 05.02.2024.
For more information on liquidations in Aave Protocol, check out this resource:
https://docs.aave.com/faq/liquidations
Schema
day - date
liquidator - the contract address of the liquidator of the borrow position
user - the contract address of the owner of the borrow position
token_col - symbol of the token used for collateral
token_debt - symbol of the token used for debt
col_contract_address - contract address of the token used for collateral
collateral_amount - collateral amount, in tokens
col_value_USD - collateral amount, converted into USD using the avg USD-Token
price of the day of the liquidation
col_current_value_USD - collateral amount, converted into USD using the USD-Token
price at the time of the dataset creation (05.02.2024)
debt_contract_address - contract address of the token used for debt
debt_amount - debt amount, in tokens
debt_amount_USD - debt amount, converted into USD using the avg USD-Token price
of the day of the liquidation
debt_current_amount_USD - debt amount, converted into USD using the USD-Token
price at the time of the dataset creation (05.02.2024)
Potential Use Cases
liquidation prediction/forecasting
LTV optimization
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-liquidationsV2')
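As a quick follow-up, here is one way to turn this dataset into a daily aggregate for forecasting. The sketch assumes the loader returns a Polars DataFrame (convert with df.to_pandas() if your version returns something else); the column names come from the schema above.

import polars as pl
from giza.datasets import DatasetsLoader

loader = DatasetsLoader()
df = loader.load('aave-liquidationsV2')

# Total liquidated debt in USD per day -- a simple starting feature for
# liquidation prediction/forecasting.
daily = (
    df.group_by("day")
    .agg(pl.col("debt_amount_USD").sum().alias("total_liquidated_usd"))
    .sort("day")
)
print(daily.tail())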
Liquidations v3
Description
This dataset contains all the individual liquidations of borrow positions in the Aave v3
Protocol. Only the liquidations in Ethereum L1 are shown and the dataset contains all the
liquidation data from inception to 05.02.2024.
For more information on liquidations in Aave Protocol, check out this resource:
https://docs.aave.com/faq/liquidations
Schema
day - date
liquidator - the contract address of the liquidator of the borrow position
user - the contract address of the owner of the borrow position
token_col - symbol of the token used for collateral
token_debt - symbol of the token used for debt
col_contract_address - contract address of the token used for collateral
collateral_amount - collateral amount, in tokens
col_value_USD - collateral amount, converted into USD using the avg USD-Token
price of the day of the liquidation
col_current_value_USD - collateral amount, converted into USD using the USD-Token
price at the time of the dataset creation (05.02.2024)
debt_contract_address - contract address of the token used for debt
debt_amount - debt amount, in tokens
debt_amount_USD - debt amount, converted into USD using the avg USD-Token price
of the day of the liquidation
debt_current_amount_USD - debt amount, converted into USD using the USD-Token
price at the time of the dataset creation (05.02.2024)
Potential Use Cases
liquidation prediction/forecasting
LTV optimization
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('aave-liquidationsV3')
Balancer
Daily Pool Liquidity
Description
This dataset provides the average daily available liquidity per token for the Balancer v1 &
v2 Liquidity Pools. Data from all the networks are taken into account and the pool_id
feature can be used as a unique identifier for the individual pools. The dataset contains
all the pool data from inception until 26.01.2024, and individual rows are omitted if there
were no trades executed in a given day.
Schema
day - date
pool_id - pool id of the balancer pool
blockchain - blockchain network of the given pool
pool_symbol - symbols of the token pairs (for weighted pools, it's possible to have
more than 2 tokens exchanged in a pool)
token_symbol - symbol of the token with the given liquidity (for every day, each pool
has one row per token)
pool_liquidity - daily average amount of available tokens in the given pool
pool_liquidity_usd - daily average amount of available tokens in the given pool,
converted to USD values using average daily token-USD price of the given day
The dataset uses day , pool_id and token_symbol as the primary keys.
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('balancer-daily-pool-liquidity')
Daily Swap Fees
Description
This dataset provides the daily average swap fees for the Balancer v1 & v2 Liquidity
Pools. Data from all the networks are taken into account and the contract_address
feature can be used as a unique identifier for the individual pools. The dataset contains
all the pool data from inception until 26.01.2024, and individual rows are omitted if there
were no trades executed in a given day.
Schema
day - date
contract_address - contract address of the balancer pool
blockchain - blockchain network of the given pool
token_pair - symbols of the token pairs exchanged (for weighted pools, it's possible to
have more than 2 tokens exchanged in a pool)
avg_swap_fee - average swap fee for the given pool on the given day
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('balancer-daily-swap-fees')
Daily Trade Volume
Description
This dataset provides the aggregated, daily trading volumes for the Balancer v1 & v2
Liquidity Pools. Data from all the networks are taken into account and the pool_id feature
can be used as a unique identifier for the individual pools. The dataset contains all the
pool data from inception until 26.01.2024, and individual rows are omitted if there were
no trades executed in a given day.
Schema
day - date
pool_id - pool id of the balancer pool
blockchain - blockchain network of the given pool
token_pair - symbols of the token pairs exchanged (for weighted pools, it's possible to
have more than 2 tokens exchanged in a pool)
trading_volume_usd - aggregated volume of token swaps executed on the given day,
converted into USD values at the time of block execution
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('balancer-daily-trade-volume')
Curve
Daily Trade Volume & Fees
Description
This dataset provides the aggregate daily trade volume for all Curve DEXes, as well as
the daily fees & admin fees generated from these trades. Data is gathered for all DEXes from
their inception date; however, individual rows are omitted if the trade volume is below 1
USD. For the pool_name column, the designated names do not always follow the
convention of "token A - token B", so it is advised to visit Curve's documentation to learn
more.
Schema
day - date
project_contract_address - contract address of the DEX
pool_name - name of the DEX (usually combination of the Tokens, but there are
exceptions with special names)
daily_volume_usd - daily trade volume in USD
admin_fee_usd - daily fees accrued by DEX, designated for the admins and
governance token holders
fee_usd - daily fees accrued by DEX, designated for LP providers
token_type - type of DEX (stable or volatile)
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('curve-daily-volume-fees')
PancakeSwap
Daily Trade Volume
Description
This dataset provides the daily trade volume per pool for all PancakeSwap protocols.
Data from all the networks and versions are taken into account, from 26.09.2023 until
26.03.2024. Individual rows are omitted if there were no trades above the 1 USD
threshold executed on a given day.
Schema
day - date
token_pair - symbols of the token pairs
project_contract_address - contract address of the DEX
version - PancakeSwap protocol version
blockchain - network of the DEX contract
daily_volume_usd - daily trade volume in USD
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('pancakeswap-daily-volume')
Uniswap
Uniswap V3 Liquidity Distribution
Description
This dataset contains the liquidity distribution of all Uniswap V3 pools on Ethereum
mainnet with TVL above $100k (as of the time of collection). Each file contains the
snapshots of liquidity distribution in 1000 block intervals as well as the current ticks at
the sampled blocks. Only 100 ticks above and below the current tick are stored. These
datasets can be used to reproduce the liquidity charts from the Uniswap analytics page.
Additionally, there is a standalone utility dataset with all of the pools' details (such as the
tokens within the pool, their decimals, or the fee charged by the pool).
Collection method
The initial list of top Uniswap V3 pools was fetched from the defillama API. Next, each
pool's details were fetched with onchain calls to the appropriate smart contracts. The
reconstruction of available liquidity over time was possible by listening to all the
historical mint and burn events emitted by the pools. Finally, the raw event data was
parsed into an easy-to-understand format.
Schema
Pools data
address: the address of the pool's smart contract
tick_spacing: tick spacing
token0: symbol of the first token in the pool
token1: symbol of the second token in the pool
decimals0: decimals of the first token in the pool
decimals1: decimals of the second token in the pool
fee: swap fee charged by the pool
token0_address: address of the first token
token1_address: address of the second token
chain: which chain the pool is deployed on (ethereum)
pool_type: what's the pool type (Uniswap V3)
Pool liquidity
block_number: block number of the snapshot
in_token: token to swap from
out_token: token to swap out to
price: price at a given tick (quoted in in_token/out_token)
amount: the amount of out_token available to be taken out of the tick
tick_id: tick id
Current ticks
current_tick: current tick of the pool
block_number: block number of the snapshot
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
liquidity_df = loader.load('uniswapv3-ethereum-usdc-weth-500-liquidity-sna
ticks_df = loader.load('uniswapv3-ethereum-usdc-weth-500-current-ticks')
pools_info_df = loader.load('uniswapv3-ethereum-pools-info')
liquidity_df.head()
ticks_df.head()
pools_info_df.head()
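For instance, assuming the loader returns Polars DataFrames, the liquidity chart at the most recent snapshot can be rebuilt from the columns described above (only the column names from the schema are relied on):

import polars as pl

# Latest snapshot block present in the liquidity dataset.
latest_block = liquidity_df["block_number"].max()
snapshot = liquidity_df.filter(pl.col("block_number") == latest_block)

# Current tick of the pool at that same block.
current_tick = ticks_df.filter(pl.col("block_number") == latest_block)["current_tick"][0]

# 'amount' is the out_token liquidity available at each tick and 'price' is
# quoted in in_token/out_token, per the schema above.
print(snapshot.select(["tick_id", "price", "amount"]).sort("tick_id"))
print(f"Current tick at block {latest_block}: {current_tick}")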
Compound
Compound V2 Interest Rates
Description
This dataset contains interest rates of all the markets on Compound V2 (ethereum
mainnet) since the protocol's inception. The interest rates are for both supplying and
borrowing. Additionally, users of this dataset can analyze the total supplied and borrowed
amounts in each market.
Collection method
The data was collected from the Compound V2 subgraph created by The Graph Protocol
(https://thegraph.com/hosted-service/subgraph/graphprotocol/compound-v2). The
queries were sent with a block parameter, corresponding to the current block at midnight
of each day within the dataset's timespan.
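If you want to reproduce a snapshot yourself, the sketch below shows what such a block-pinned query could look like. The endpoint URL and the exact entity/field names are assumptions based on the description above and the schema below (the hosted service has since been deprecated), so adjust them to the subgraph you actually query.

import requests

SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/graphprotocol/compound-v2"  # assumed endpoint

QUERY = """
query Markets($block: Int!) {
  markets(block: { number: $block }) {
    symbol
    borrowRate
    supplyRate
    totalSupply
    totalBorrows
    exchangeRate
    underlyingPriceUSD
  }
}
"""

def fetch_markets_at_block(block_number: int):
    """Fetch a snapshot of all Compound V2 markets at a given block."""
    response = requests.post(
        SUBGRAPH_URL,
        json={"query": QUERY, "variables": {"block": block_number}},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]["markets"]

# Example: snapshot at an arbitrary, hypothetical block number.
print(fetch_markets_at_block(15000000)[:2])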
Schema
symbol - symbol of the receipt token (e.g. cETH)
totalBorrows - total amount of underlying tokens borrowed from the market
borrowRate - interest rate paid by the borrowers
totalSupply - total amount of receipt tokens issued by the market
supplyRate - interest rate paid by the suppliers
underlyingPriceUSD - USD value of the underlying token of a given market
exchangeRate - the exchange rate of receipt tokens (i.e cETH/ETH)
timestamp - unix timestamp of the snapshot
block_number - ethereum mainnet block number of the snapshot
totalSupplyUnderlying - total amount of underlying tokens supplied to the market
totalSupplyUSD - total USD value of the supplied tokens
totalBorrowUSD - total USD value of the tokens borrowed from the market
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('compound-daily-interest-rates')
Yearn
Individual Vault Deposits
Description
This dataset provides the individual token deposits made to the Yearn v2 Vaults. Only the
pools in Ethereum L1 are taken into account and the vaults feature can be used as a
unique identifier for the individual vaults. The dataset contains all the deposit data from
the vault's inception. The value feature contains the token amount, already divided by
the token's decimal power of 10 to avoid overflows.
Schema
evt_block_time - datetime of the deposit execution
evt_block_number - block number of the deposit execution
vaults - contract address of the Yearn Vault
token_contract_address - contract address of the underlying token
token_symbol - symbol of the underlying token
token_decimals - decimals of the underlying token
value - deposit value in tokens (value is already decimalized using token_decimals)
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('yearn-individual-deposits')
Individual Vault Withdraws
Description
This dataset provides the individual token withdrawals made from the Yearn v2 Vaults. Only
the pools in Ethereum L1 are taken into account and the vaults feature can be used as
a unique identifier for the individual vaults. The dataset contains all the withdrawal data
from the vault's inception. The value feature contains the token amount, already divided
by the token's decimal power of 10 to avoid overflows.
Schema
evt_block_time - datetime of the withdrawal execution
evt_block_number - block number of the withdrawal execution
vaults - contract address of the Yearn Vault
token_contract_address - contract address of the underlying token
token_symbol - symbol of the underlying token
token_decimals - decimals of the underlying token
value - withdrawal value in tokens (value is already decimalized using token_decimals)
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('yearn-individual-withdraws')
Strategy Borrows
Description
This dataset provides the borrows made to the Yearn v2 Vaults by the associated
strategies. Only the pools in Ethereum L1 are taken into account and the vaults feature
can be used as a unique identifier for the individual vaults, while the
strategy_address feature can be used as a unique identifier for the strategies. The
dataset contains all the borrow data from the vault's inception. The value feature contains
the token amount, already divided by the token's decimal power of 10 to avoid
overflows.
Schema
evt_block_time - datetime of the deposit execution
evt_block_number - block number of the deposit execution
vaults - contract address of the Yearn Vault
strategy_address - contract address of the strategy
token_contract_address - contract address of the underlying token
token_symbol - symbol of the underlying token
token_decimals - decimals of the underlying token
value - deposit value in tokens (value is already decimalized using token_decimals)
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('yearn-strategy-borrows')
Strategy Returns
Description
This dataset provides the returns made to the Yearn v2 Vaults by the associated
strategies. Only the pools in Ethereum L1 are taken into account and the vaults feature
can be used as a unique identifier for the individual vaults, while the
strategy_address feature can be used as a unique identifier for the strategies. The
dataset contains all the return data from the vault's inception. The value feature contains
the token amount, already divided by the token's decimal power of 10 to avoid
overflows.
Schema
evt_block_time - datetime of the deposit execution
evt_block_number - block number of the deposit execution
vaults - contract address of the Yearn Vault
strategy_address - contract address of the strategy
token_contract_address - contract address of the underlying token
token_symbol - symbol of the underlying token
token_decimals - decimals of the underlying token
value - deposit value in tokens (value is already decimalized using token_decimals)
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('yearn-strategy-returns')
Farcaster
Individual Reactions
Description
This dataset provides the individual reactions in the Farcaster Social Protocol that are
sent between 01.01.2024 and 04.01.2024. fid is the unique identifier in the Farcaster
protocol and can be used to connect users with their reactions. reaction_type is a
categorical variable: 1 means the user liked the target cast, while 2 means the user
reshared (recast) it with their audience.
Schema
id - id of the reaction
fid - fid of the reactor
created_at - timestamp of the reaction
target_fid - fid of the target cast
reaction_type - type of reaction, see the description
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('farcaster-reactions')
Individual Casts
Description
This dataset provides the individual casts in the Farcaster Social Protocol that are sent
between 01.01.2024 and 01.03.2024. fid is the unique identifier in the Farcaster
protocol and can be used to connect users with their casts.
Schema
created_at - timestamp of the cast creation
id - id of the cast
fid - fid of the cast creator
hash - hash of the cast
text - text content of the cast
embeds - embed content of the cast
mentions - mentioned content of the cast
parent_fid - fid of the parent cast
parent_url - url of the parent cast
parent_hash - hash of the parent cast
root_parent_url - url of the root cast
root_parent_hash - hash of the root cast
mentions_positions - positions of the mentions in the cast text
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('farcaster-casts')
Individual Links
Description
This dataset provides the individual links in the Farcaster Social Protocol that are created
between 01.01.2024 and 01.02.2024. fid is the unique identifier in the Farcaster
protocol and can be used to connect users with their link creators and link targets.
Schema
created_at - timestamp of the link creation
id - id of the link creation
fid - fid of the link creator
hash - hash of the link execution
type - link type
target_fid - fid of the link target
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('farcaster-links')
Lens
Individual Posts
Description
This dataset provides the individual posts in the Lens Protocol that are sent between
01.01.2024 and 04.04.2024. Note that because the content of a post can be any
multimedia, we do not include the content directly; instead, a column contains the URI
of the content.
Schema
call_block_time - timestamp of the post creation
profileID - id of the profile which created the post
contentURI - URI that contains the content of the post
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('lens-posts')
Individual Comments
Description
This dataset provides the individual comments in the Lens Protocol that are sent
between 01.01.2024 and 04.04.2024. Note that because the content of a comment can be
any multimedia, we do not include the content directly; instead, a column contains the
URI of the content.
Schema
call_block_time - timestamp of the comment creation
profileID - id of the profile which created the comment
contentURI - URI that contains the content of the comment
pointed_profileID - id of the profile which created the pointed post
pointed_pubID - id of the pointed post
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('lens-comments')
Individual Mirrors
Description
This dataset provides the individual mirrors in the Lens Protocol that are sent between
01.01.2024 and 04.04.2024.
Schema
call_block_time - timestamp of the mirror creation
profileID - id of the profile which created the mirror
pointed_profileID - id of the profile which created the pointed post
pointed_pubID - id of the pointed post
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('lens-mirrors')
Individual Quotes
Description
This dataset provides the individual quotes in the Lens Protocol that are sent between
01.01.2024 and 04.04.2024. Note that because the content of a quote can be any
multimedia, we do not include the content directly; instead, a column contains the URI
of the content.
Schema
call_block_time - timestamp of the quote creation
profileID - id of the profile which created the quote
contentURI - URI that contains the content of the quote
pointed_profileID - id of the profile which created the pointed post
pointed_pubID - id of the pointed post
Use example
from giza.datasets import DatasetsLoader
# Usage example:
loader = DatasetsLoader()
df = loader.load('lens-quotes')
Tools
Benchmark CLI
Benchmark CLI is a tool to run benchmarks on your Cairo projects through a single
command.
When you run a benchmark on a Cairo program, it runs locally the entire Cairo stack:
Runner (CairoVM)
Prover (Platinum)
Verifier (Platinum)
Prerequisites
Rust
Platinum prover. As of February 2024, the tested revision is fed12d6 .
You can install the prover with the following command:
Installation
cargo install --git https://github.com/gizatechxyz/giza-benchmark.git
Usage
giza-benchmark -p <SIERRA_FILE> -i <PROGRAM_INPUT_FILE> -b <OUTPUT_DIRECTORY>
Example:
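(The paths below are placeholders for illustration; substitute your own Sierra program, input file, and output directory.)
giza-benchmark -p my_model.sierra.json -i input.txt -b ./benchmarks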
ZKCook
It's important to note that although the main goal of these tools is the transition from ML
to ZKML, mcr can be useful in other contexts, such as:
The model's weight needs to be minimal, for example for mobile applications.
Minimal inference times are required for low latency applications.
We want to check if we have created an overly complex model and a simpler one
would give us the same performance (or even better).
The number of steps required to perform the inference must be less than X (as is
currently constrained by the ZKML paradigm).
Installation
Install from PyPi
Serialization
To see in more detail how this tool works, check out this tutorial.
To run it:
from giza.zkcook import serialize_model

serialize_model(YOUR_TRAINED_MODEL, "OUTPUT_PATH/MODEL_NAME.json")
This serialized model can then be sent directly for transpilation on our platform. If the
model is too large, it will have to be reduced using MCR, but the structure is already
understandable by our transpiler.
MCR
To see in more detail how this tool works, check out this tutorial.
To see in more technical detail the algorithm behind this method, check out our paper.
To run it:
from giza.zkcook import mcr

model, transformer = mcr(model = MY_MODEL,
X_train = X_train,
y_train = y_train,
X_eval = X_test,
y_eval = y_test,
eval_metric = 'rmse',
transform_features = True)
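The reduced model returned by mcr can then be serialized for transpilation with the serializer shown above (the file name is illustrative); if transform_features = True was used, the returned transformer presumably needs to be applied to your inputs at inference time as well.

from giza.zkcook import serialize_model

# Serialize the complexity-reduced model for the Giza transpiler.
serialize_model(model, "reduced_model.json")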
Supported models
Model status
XGBRegressor ✅
XGBClassifier ✅
LGBMRegressor ✅
LGBMClassifier ✅
Logistic Regression ⏳
GARCH ⏳
Tutorials
ZKML
In this collection of tutorials, you'll learn how to transpile models and make verifiable
inferences.
beginner nn
Verifiable XGBoost
In this tutorial you will learn how to use the Giza stack through an XGBoost model.
Installation
To follow this tutorial, you must first proceed with the following installation.
You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.
Now, your terminal session will use Python 3.11 for this project.
Install Giza
You'll find more options for installing Giza in the installation guide.
Install Dependencies
Setup
From your terminal, create a Giza user through our CLI in order to access the Giza
Platform:
Optional: you can create an API Key for your user in order to not regenerate your access
token every few hours.
from sklearn.datasets import load_diabetes

data = load_diabetes()
X, y = data.data, data.target
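The model-training step is not shown in this excerpt. A minimal sketch that continues from the snippet above (hyperparameters are illustrative, and serializing with zkcook is one possible way, described in the Tools section, to produce a transpilable artifact):

from xgboost import XGBRegressor
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from giza.zkcook import serialize_model

data = load_diabetes()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Keep the model small so the resulting Cairo program stays tractable.
model = XGBRegressor(n_estimators=50, max_depth=3)
model.fit(X_train, y_train)

# Serialize to the JSON format understood by the Giza transpiler.
serialize_model(model, "xgb_diabetes.json")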
>>>>
[giza][2024-05-10 17:14:48.565] No model id provided, checking if model ex
[giza][2024-05-10 17:14:48.567] Model name is: xgb_diabetes
[giza][2024-05-10 17:14:49.081] Model already exists, using existing model
[giza][2024-05-10 17:14:49.083] Model found with id -> 588! ✅
[giza][2024-05-10 17:14:49.777] Version Created with id -> 2! ✅
[giza][2024-05-10 17:14:49.780] Sending model for transpilation ✅
[giza][2024-05-10 17:15:00.670] Transpilation is fully compatible. Version
⠙ Transpiling Model...
[giza][2024-05-10 17:15:01.337] Downloading model ✅
[giza][2024-05-10 17:15:01.339] model saved at: xgb_diabetes
Now that our model is transpiled to Cairo, we can deploy an endpoint to run verifiable
inferences. We will use the Giza CLI again to deploy the endpoint. Be sure to replace
model-id and version-id with the ids provided during transpilation.
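The deploy command mirrors the one used later in this documentation:
giza endpoints deploy --model-id <model-id> --version-id <version-id>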
>>>>
▰▰▰▰▰▰▰ Creating endpoint!t!
[giza][2024-05-10 17:15:21.628] Endpoint is successful ✅
[giza][2024-05-10 17:15:21.635] Endpoint created with id -> 190 ✅
[giza][2024-05-10 17:15:21.636] Endpoint created with endpoint URL: https
def execution():
# The input data type should match the model's expected input
input = X_test[1, :]
if __name__ == "__main__":
data = load_diabetes()
X, y = data.data, data.target
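A hedged completion of the snippet above is shown below. The GizaModel import path and the input_feed key are assumptions (check the version of giza-agents you have installed); the model and version ids are the example values from the transpilation logs.

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from giza.agents.model import GizaModel  # assumed import path

MODEL_ID = 588   # example ids taken from the logs above
VERSION_ID = 2

def execution():
    data = load_diabetes()
    X, y = data.data, data.target
    _, X_test, _, _ = train_test_split(X, y, random_state=42)

    # The input data type should match the model's expected input
    input = X_test[1, :].astype(np.float32)

    model = GizaModel(id=MODEL_ID, version=VERSION_ID)
    # With verifiable=True the endpoint runs the Cairo program and returns
    # the prediction together with the id of the proving request.
    result, request_id = model.predict(
        input_feed={"input": input}, verifiable=True
    )
    print(f"Result: {result}, request id: {request_id}")

if __name__ == "__main__":
    execution()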
If your problem is a binary classification problem, you will need to post-process the
result obtained after executing the predict method. The code you need to execute to get
the probability of class 1 (same probability returned by XGBClassifier.predict_proba()) is in
the following code snippet
import json
import math

def logit(x):
    return math.log(x / (1 - x))

def post_process_binary_pred(model_json_path, result):
    """
    Parameters:
        model_json_path (str): Path to the trained model in JSON format.
        result (float): Result from GizaModel.predict().

    Returns:
        float: Probability of the positive class.
    """
    with open(model_json_path, 'r') as f:
        xg_json = json.load(f)
    base_score = float(xg_json['learner']['learner_model_param']['base_score'])
    if base_score != 0:
        result = result + logit(base_score)
    final_score = 1 / (1 + math.exp(-result))
    return final_score

# Usage example
model_path = 'PATH_TO_YOUR_MODEL.json'  # Path to your model JSON file
predict_result = 3.45  # Example result from GizaModel.predict()
probability = post_process_binary_pred(model_path, predict_result)
Download the proof
For more detailed information on proving, please consult the Prove resource.
Initiating a verifiable inference sets off a proving job on our server, sparing you the
complexities of installing and configuring the prover yourself. Upon completion, you can
download your proof.
First, let's check the status of the proving job to ensure that it has been completed.
Remember to substitute endpoint-id and proof-id with the specific IDs assigned to
you throughout this tutorial.
>>>
[giza][2024-03-19 11:51:45.470] Getting proof from endpoint 190 ✅
{
"id": 664,
"job_id": 831,
"metrics": {
"proving_time": 15.083126
},
"created_date": "2024-03-19T10:41:11.120310"
}
>>>>
[giza][2024-03-19 11:55:49.713] Getting proof from endpoint 190 ✅
[giza][2024-03-19 11:55:50.493] Proof downloaded to zk_xgboost.proof ✅
It is better to surround the proof-id with double quotes (") when using the alphanumeric id.
Verify the proof
Finally, you can verify the proof.
>>>>
[giza][2024-05-21 10:08:59.315] Verifying proof...
[giza][2024-05-21 10:09:00.268] Verification result: True
[giza][2024-05-21 10:09:00.270] Verification time: 0.437505093
Verifiable Linear Regression
In this tutorial you will learn how to use the Giza stack through a Linear Regression model.
Installation
To follow this tutorial, you must first proceed with the following installation.
You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.
Now, your terminal session will use Python 3.11 for this project.
Install Giza
You'll find more options for installing Giza in the installation guide.
Install Dependencies
Setup
From your terminal, create a Giza user through our CLI in order to access the Giza
Platform:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
def execution():
# The input data type should match the model's expected input
input = np.array([[5.5]]).astype(np.float32)
print(
f"Predicted value for input {input.flatten()[0]} is {result[0].fla
execution()
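The training step is not shown in the excerpt above. A minimal sketch of fitting a one-feature linear regression (the synthetic data is purely illustrative) follows; in the full tutorial the fitted model is presumably exported to ONNX, transpiled, and served from an endpoint, which is where result in execution() comes from.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic one-feature dataset: y is roughly 2x + 1 with a little noise.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1)).astype(np.float32)
y = (2 * X.squeeze() + 1 + rng.normal(0, 0.1, size=200)).astype(np.float32)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)
print(f"R^2 on the test split: {model.score(X_test, y_test):.4f}")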
Initiating a verifiable inference sets off a proving job on our server, sparing you the
complexities of installing and configuring the prover yourself. Upon completion, you can
download your proof.
First, let's check the status of the proving job to ensure that it has been completed.
Remember to substitute endpoint-id and proof-id with the specific IDs assigned to
you throughout this tutorial.
>>>
[giza][2024-03-19 11:51:45.470] Getting proof from endpoint 109 ✅
{
"id": 664,
"job_id": 831,
"metrics": {
"proving_time": 15.083126
},
"created_date": "2024-03-19T10:41:11.120310"
}
>>>>
[giza][2024-03-19 11:55:49.713] Getting proof from endpoint 109 ✅
[giza][2024-03-19 11:55:50.493] Proof downloaded to zklr.proof ✅
It is better to surround the proof-id with double quotes (") when using the alphanumeric id.
>>>>
[giza][2024-05-21 10:08:59.315] Verifying proof...
[giza][2024-05-21 10:09:00.268] Verification result: True
[giza][2024-05-21 10:09:00.270] Verification time: 0.437505093
Verifiable MNIST Neural Network
Giza provides developers with the tools to easily create and expand Verifiable Machine
Learning solutions, transforming their Python scripts and ML models into robust,
repeatable workflows.
In this tutorial, we will explore the process of building your first Neural Network using
MNIST dataset, Pytorch, and Giza SDK and demonstrating its verifiability.
You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.
Now, your terminal session will use Python 3.11 for this project.
Install Giza
Install Giza CLI
Install the CLI from PyPi:
You'll find more options for installing Giza in the installation guide.
Install Dependencies
Setup
From your terminal, create a Giza user through our CLI in order to access the Giza
Platform:
Optional: you can create an API Key for your user in order to not regenerate your access
token every few hours.
giza users create-api-key
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import numpy as np
import logging
from scipy.ndimage import zoom
from torch.utils.data import DataLoader, TensorDataset
def resize_images(images):
return np.array([zoom(image[0], (0.5, 0.5)) for image in images])
def prepare_datasets():
print("Prepare dataset...")
train_dataset = torchvision.datasets.MNIST(root='./data', train=True,
test_dataset = torchvision.datasets.MNIST(root='./data', train=False)
x_train = resize_images(train_dataset)
x_test = resize_images(test_dataset)
def train_model(train_loader):
print("Train model...")
outputs = model(images)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0:
print(f'Epoch [{epoch + 1}/{num_epochs}], Step [{i + 1}/{l
def execution():
# Prepare training and testing datasets
x_train, y_train, x_test, y_test = prepare_datasets()
model = train_model(train_loader)
test_model(model, test_loader)
return model
model = execution()
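The network definition itself is omitted from this excerpt. A minimal sketch consistent with the 14x14 resized inputs (the tutorial's actual architecture may differ) could be:

import torch.nn as nn
import torch.nn.functional as F

class MNISTModel(nn.Module):
    """Small fully-connected classifier for 14x14 MNIST images."""

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(14 * 14, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)      # flatten the 14x14 image
        x = F.relu(self.fc1(x))
        return self.fc2(x)

# Typical objects used inside train_model (assumed, since they are not shown above):
# model = MNISTModel()
# criterion = nn.CrossEntropyLoss()
# optimizer = optim.Adam(model.parameters(), lr=1e-3)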
ONNX, short for Open Neural Network Exchange, is an open format for representing and
exchanging machine learning models between different frameworks and libraries. It
serves as an intermediary format that allows you to move models seamlessly between
various platforms and tools, facilitating interoperability and flexibility in the machine
learning ecosystem.
import torch.onnx
onnx_file_path = "mnist_model.onnx"
convert_to_onnx(model, onnx_file_path)
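The convert_to_onnx helper is not shown in the excerpt; a minimal sketch using torch.onnx.export is given below. The 14x14 dummy input shape and the "image" input name are assumptions (the latter matches the input_feed key used later in the agent tutorial).

import torch

def convert_to_onnx(model, onnx_file_path):
    # Dummy input with the shape the model expects: a batch of one 14x14 image.
    dummy_input = torch.randn(1, 14, 14)
    model.eval()
    torch.onnx.export(
        model,
        dummy_input,
        onnx_file_path,
        input_names=["image"],
        output_names=["output"],
    )
    print(f"Model exported to {onnx_file_path}")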
Now that your model is converted to ONNX format, use the Giza-CLI to transpile it to
Orion Cairo code.
> giza transpile mnist_model.onnx --output-path verifiable_mnist
>>>
[giza][2024-02-07 16:31:20.844] No model id provided, checking if model ex
[giza][2024-02-07 16:31:20.845] Model name is: mnist_model
[giza][2024-02-07 16:31:21.599] Model Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.436] Version Created with id -> 1! ✅
[giza][2024-02-07 16:31:22.437] Sending model for transpilation ✅
[giza][2024-02-07 16:32:13.511] Transpilation is fully compatible. Version
[giza][2024-02-07 16:32:13.516] Transpilation recieved! ✅
[giza][2024-02-07 16:32:14.349] Transpilation saved at: verifiable_mnist
Thanks to full support in the transpiler for all operators used by the MNIST model, your
transpilation process is fully compatible. This ensures that your project compiles
smoothly and has already been compiled behind the scenes on our platform.
If your model incorporates operators that aren't supported by the transpiler, you may need
to refine your Cairo project to ensure successful compilation. For more details, refer to the
Transpile resource.
With your model successfully transpiled, it's now ready for deployment of an inference
endpoint. Our deployment process sets up services that handle prediction requests via a
designated endpoint, using Cairo to ensure inference provability.
Deploy your service, which will be ready to accept prediction requests at the
/cairo_run endpoint, by using the following command:
When you initiate a prediction using Giza it executes the Cairo program using CairoVM,
generating trace and memory files for the proving. It also returns the output value and
initiates a proving job to generate a Stark proof of the inference.
def preprocess_image(image_path):
from PIL import Image
import numpy as np
def execution():
image = preprocess_image("./imgs/zero.png")
(result, request_id) = prediction(image, MODEL_ID, VERSION_ID)
print("Result: ", result)
print("Request id: ", request_id)
execution()
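The body of preprocess_image is omitted above. A sketch consistent with the 14x14 resizing used during training (the normalization details are an assumption):

import numpy as np
from PIL import Image

def preprocess_image(image_path):
    """Load a digit image and shape it like the model's training inputs."""
    image = Image.open(image_path).convert("L")   # grayscale
    image = image.resize((14, 14))                # match the resized MNIST inputs
    array = np.asarray(image, dtype=np.float32) / 255.0
    return array.reshape(1, 14, 14)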
🚀 Starting deserialization process...
✅ Deserialization completed! 🎉
(0, '"3a15bca06d1f4788b36c1c54fa71ba07"')
Executing a verifiable inference sets off a proving job on our server, sparing you the
complexities of installing and configuring the prover yourself. Upon completion, you can
download your proof.
First, let's check the status of the proving job to ensure that it has been completed.
Remember to substitute endpoint-id and proof-id with the specific IDs assigned to
you throughout this tutorial.
>>>
[giza][2024-03-19 11:51:45.470] Getting proof from endpoint 109 ✅
{
"id": 664,
"job_id": 831,
"metrics": {
"proving_time": 15.083126
},
"created_date": "2024-03-19T10:41:11.120310"
}
>>>>
[giza][2024-03-19 11:55:49.713] Getting proof from endpoint 1 ✅
[giza][2024-03-19 11:55:50.493] Proof downloaded to zk_mnist.proof ✅
Voilà 🎉🎉
You've learned how to use the entire Giza stack, from training your model to transpiling it
to Cairo for verifiable execution. We hope you've enjoyed this journey!
It is better to surround the proof-id with double quotes (") when using the alphanumeric id.
>>>>
[giza][2024-05-21 10:08:59.315] Verifying proof...
[giza][2024-05-21 10:09:00.268] Verification result: True
[giza][2024-05-21 10:09:00.270] Verification time: 0.437505093
AI Agents
In this collection of tutorials, you'll learn how to use AI Agents to make your smart
contracts smarter.
advanced agent
Pendle Trading Agent
This project uses Poetry as the dependency manager; to install the required
dependencies, simply execute:
poetry install
An active Giza account is required to deploy the model and use agents. If you don't
have one, you can create one here.
You also need an Ape account to use Giza Agents; you can read how to create an Ape
account, as well as the basics of the Ape Framework, here.
To run a forked Ethereum network locally, you need to install Foundry.
Finally, we will need some environment variables. Create a .env file in the directory of
this project and populate it with one variable:
DEV_PASSPHRASE="<YOUR-APE-ACCOUNT-PASSWORD>"
Checklist:
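The fork command itself is not included in this excerpt. With Foundry installed, an anvil invocation matching the description below looks like this (the mainnet RPC URL is yours to provide):
anvil --fork-url <YOUR-MAINNET-RPC-URL> --fork-block-number 19754466 --chain-id 1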
This creates a local Ethereum mainnet network (chain-id 1) forked from block 19754466,
which we will use to run our agent on. Working on a local fork provides various
advantages, such as being able to experiment with any smart contract and protocol that
is on the target network.
Before running the setup.py, make sure to edit the marked lines to include your Ape
password and your Ape username
python agent/setup.py
This command will login to your Ape wallet, and mint you the required amount of weETH
to be able to run the agent.
6. Pendle Agent
Before running the agent, let's look at some code snippets to understand what the code is
actually doing.
if __name__ == "__main__":
# Create the parser
parser = argparse.ArgumentParser()
# Add arguments
parser.add_argument("--agent-id", metavar="A", type=int, help="model-
parser.add_argument("--weETH-amount", metavar="W", type=float, help="w
parser.add_argument("--fixed-yield", metavar="Y", type=float, help="f
parser.add_argument("--expiration-days", metavar="E", type=int, help=
# Parse arguments
args = parser.parse_args()
agent_id = args.agent_id
weETH_amount = args.weETH_amount
fixed_yield = args.fixed_yield
expiration_days = args.expiration_days
SY_PY_swap(weETH_amount,agent_id,fixed_yield, expiration_days)
To run the agent, we need to pass as arguments the id of the agent (in our case, 5), the
weETH amount that we allocate to the wallet (let's give 5 weETH), the fixed yield, and the
expiration date of the pool (the last two can be parsed from the pools in a later iteration).
The main function simply parses the arguments, and runs the main function of the agent,
SY_PY_swap().
contracts = {
"router": router,
"routerProxy": routerProxy,
"SY_weETH_Market": SY_weETH_Market,
"weETH": weETH,
"PT_weETH": PT_weETH,
}
agent = create_agent(
agent_id=agent_id,
chain=chain,
contracts=contracts,
account_alias=account,
)
We are putting all the contracts that our agent will interact with into a dictionary, and using
that in agent creation. Giza Agents automatically creates the Contracts() objects you
might be familiar with from the Ape Framework.
decimals = contracts.weETH.decimals()
weETH_amount = weETH_amount * 10**decimals
state = contracts.SY_weETH_Market.readState(contracts.router.addre
contracts.weETH.approve(contracts.routerProxy.address, traded_SY_a
contracts.routerProxy.swapExactTokenForPt(wallet_address, contract
,input_tuple(contracts.weETH, traded_SY
PT_balance = contracts.PT_weETH.balanceOf(wallet_address)
weETH_balance = contracts.weETH.balanceOf(wallet_address)
You can change the variables to see how it affects the trade at the end.
15:53:46.145 | INFO | Created flow run 'imperial-sloth' for flow 'SY-PY
15:53:46.145 | INFO | Action run 'imperial-sloth' - View at https://act
15:53:46.765 | INFO | Action run 'imperial-sloth' - Created task run 'C
15:53:46.765 | INFO | Action run 'imperial-sloth' - Executing 'Create a
15:53:48.583 | INFO | Task run 'Create a Giza agent using Agent_ID-0'
15:53:48.673 | INFO | Action run 'imperial-sloth' - Created task run 'R
15:53:48.673 | INFO | Action run 'imperial-sloth' - Executing 'Run the
🚀 Starting deserialization process...
✅ Deserialization completed! 🎉
15:53:53.102 | INFO | Task run 'Run the yield prediction model-0' - Fin
15:53:53.104 | INFO | Action run 'imperial-sloth' - Result: AgentResult
0.02796276, 1.01916988]])}, request_id=4e8cc39af7a443a0af66b0426cb
INFO: Connecting to existing Erigon node at https://ethereum-rpc.publicnod
15:53:53.668 | INFO | Connecting to existing Erigon node at https://ethere
WARNING: Danger! This account will now sign any transaction it's given.
15:53:54.638 | WARNING | Danger! This account will now sign any transactio
15:53:54.649 | INFO | Action run 'imperial-sloth' - Verification comple
15:53:56.089 | INFO | Action run 'imperial-sloth' - Calculated Price: 1
15:56:41.745 | INFO | Action run 'imperial-sloth' - The amount of SY to
WARNING: Using cached key for pendle-agent
15:56:41.765 | WARNING | Using cached key for pendle-agent
INFO: Confirmed 0xfaa27a6370eacd3123baa95fb5f9597a8385897be935e818d719dbc8
15:56:42.123 | INFO | Confirmed 0xfaa27a6370eacd3123baa95fb5f9597a8385897b
INFO: Confirmed 0xfaa27a6370eacd3123baa95fb5f9597a8385897be935e818d719dbc8
15:56:42.130 | INFO | Confirmed 0xfaa27a6370eacd3123baa95fb5f9597a8385897b
WARNING: Using cached key for pendle-agent
15:56:42.457 | WARNING | Using cached key for pendle-agent
INFO: Confirmed 0xb37d84466093365255b4dd3e6a90bbb4b0fd5f38a5b8ee19449f385f
15:56:46.165 | INFO | Confirmed 0xb37d84466093365255b4dd3e6a90bbb4b0fd5f38
INFO: Confirmed 0xb37d84466093365255b4dd3e6a90bbb4b0fd5f38a5b8ee19449f385f
15:56:46.174 | INFO | Confirmed 0xb37d84466093365255b4dd3e6a90bbb4b0fd5f38
15:56:46.186 | INFO | Action run 'imperial-sloth' - Swap succesfull! Cu
15:56:46.398 | INFO | Action run 'imperial-sloth' - Finished in state C
Congrats, you have just used a Giza Agent to trade on the Pendle Protocol using a
ZKML yield prediction model.
Create an AI Agent to
Mint an MNIST NFT
Agents are entities designed to assist users in interacting with Smart Contracts by
managing the proof verification of verifiable ML models and executing these contracts
using Ape's framework.
Agents serve as intermediaries between users and Smart Contracts, facilitating seamless
interaction with verifiable ML models and executing associated contracts. They handle
the verification of proofs, ensuring the integrity and authenticity of data used in contract
execution.
In this tutorial, we will create an AI Agent to mint an MNIST NFT. Using an existing MNIST
endpoint, we will perform a prediction on the MNIST dataset and mint an NFT based on
the prediction.
This tutorial can be thought of as the continuation of the previous tutorial Verifiable MNIST
Neural Network.
Installation
To follow this tutorial, you must first proceed with the following installation.
You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.
Now, your terminal session will use Python 3.11 for this project.
Install Giza
You'll find more options for installing Giza in the installation guide.
Install Dependencies
Setup
From your terminal, create a Giza user through our CLI in order to access the Giza
Platform:
Create an API Key for your user in order to not regenerate your access token every few
hours.
This will create a new account under $HOME/.ape/accounts using the keyfile structure
from the eth-keyfile library. For more information on account management, you can
refer to Ape's framework documentation.
We encourage the creation of a new account for each agent, as it will allow you to
manage the agent's permissions and access control more effectively, but importing
accounts is also possible.
In this case we will use the Sepolia testnet; you can get some testnet ETH from a faucet like
the Alchemy Sepolia Faucet or the LearnWeb3 Faucet. These faucets will ask for security
measures to make sure that you are not a bot, like holding a specific amount of ETH on
mainnet or having a GitHub account. There are many faucets available; you can choose
the one that suits you best.
Once we receive the testnet ETH and have a funded account, we can proceed to
create an AI Agent.
Creating an AI Agent
Now that we have a funded account, we can create an AI Agent. We can do this by running
the following command:
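(The command itself is missing from this excerpt; the flags below are an assumption based on the Giza CLI's agent workflow, so check giza agents create --help for the authoritative options.)
giza agents create --model-id <model-id> --version-id <version-id> --name <agent-name> --description <agent-description>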
This command will prompt you to choose the account you want to use with the agent;
once you select the account, the agent will be created and you will receive the agent id.
The output will look like this:
This will create an AI Agent that can be used to interact with the deployed MNIST model.
How to use the AI Agent
Now that the agent is created we can start using it through the Agent SDK. As we will be
using the agent to mint a MNIST NFT, we will need to provide the MNIST image to the
agent and preprocess it before sending it to the model.
import pprint
import numpy as np
from PIL import Image
Now we will start creating the functions that are necessary to create a program that will:
To load and preprocess the image we can use the function that we already created in the
previous tutorial Verifiable MNIST Neural Network.
# This function is for the previous MNIST tutorial
def preprocess_image(image_path: str):
"""
Preprocess an image for the MNIST model.
Args:
image_path (str): Path to the image file.
Returns:
np.ndarray: Preprocessed image.
"""
Now, we will create a function to create an instance of the agent. The agent is an
extension of the GizaModel class, so we can execute predict as if we were using a
model, but the agent needs more information:
chain: The chain where the contract and account are deployed
contracts: This is a dictionary in the form of
{"contract_alias": "contract_address"} that contains the contract alias and
address.
This contract_alias is the alias that we will use when executing the contract through
code.
def create_agent(model_id: int, version_id: int, chain: str, contract: str):
"""
Create a Giza agent for the MNIST model with MNIST
"""
agent = GizaAgent(
contracts={"mnist": contract},
id=model_id,
version_id=version_id,
chain=chain,
account="my_account1"
)
return agent
This new task will execute the predict method of the agent following the same
format as a GizaModel instance. But in this case, the agent will return an AgentResult
object that contains the prediction result, request id and multiple utilities to handle the
verification of the prediction.
def predict(agent: GizaAgent, image: np.ndarray):
    """
    Args:
        image (np.ndarray): Image to predict.

    Returns:
        int: Predicted digit.
    """
    prediction = agent.predict(
        input_feed={"image": image}, verifiable=True
    )
    return prediction
Once we have the result, we need to access it. The AgentResult object contains the
value attribute that contains the prediction result. In this case, the prediction result is a
number from 0 to 9 that represents the digit that the model predicted.
When we access the value, the execution will be blocked until the proof of the prediction
has been created and once created it is verified. If the proof is not valid, the execution
will raise an exception.
This task is more for didactic purposes, to showcase in the workspaces the time that it
can take to verify the proof. In a real scenario, you can use the AgentResult value
directly in any other task.
def get_digit(prediction):
    """
    Args:
        prediction (dict): Prediction from the model.

    Returns:
        int: Predicted digit.
    """
    # This will block the execution until the proof of the prediction
    # has been generated and verified
    return int(prediction.value[0].argmax())
Finally, we will create a task to mint an NFT based on the prediction result. This task will
use the prediction result to mint an NFT with the image of the MNIST digit predicted.
To execute the contract we have the GizaAgent.execute context that will yield all the
initialized contracts and the agent instance. This context will be used to execute the
contract as it handles all the connections to the nodes, networks, and contracts.
In our instantiation of the agent we added an mnist alias to access the contract to ease
its use. For example:
# A context is created that yields all the contracts used in the agent
with agent.execute() as contracts:
    # This is how we access the contract
    contracts.mnist
    # This is how we access the functions of the contract
    contracts.mnist.mint(...)
def execute_contract(agent: GizaAgent, digit: int):
"""
Execute the MNIST contract with the predicted digit to mint a new NFT
Args:
agent (GizaAgent): Giza agent.
digit (int): Predicted digit.
Returns:
str: Transaction hash.
"""
with agent.execute() as contracts:
contract_result = contracts.mnist.mint(digit)
return contract_result
Now that we have all the steps defined, we can create the function to execute.
def mint_nft_with_prediction():
# Preprocess image
image = preprocess_image("seven.png")
# Create Giza agent
agent = create_agent(MODEL_ID, VERSION_ID, CHAIN, MNIST_CONTRACT)
# Predict digit
prediction = predict(agent, image)
# Get digit
digit = get_digit(prediction)
# Execute contract
result = execute_contract(agent, digit)
pprint.pprint(result)
Remember the passphrase we kept in a safe place? Now it is time to use it. For learning
purposes, we will use the passphrase in the code, but in a real scenario you should keep
it safe and never hardcode it.
mint_nft_with_prediction()
Using Ape's default networks (chains) relies on public RPC nodes that could hit request
limits and make the execution of the contract fail. For a better experience, consider using
a private RPC node with higher quotas.
We learned how to load and preprocess the MNIST image, create an instance of the
agent, predict the MNIST image using the agent, access the prediction result, and mint an
NFT based on the prediction result.
Using Arbitrum with AI Agents
How to interact with Arbitrum using AI Agents
In the previous tutorial we have seen how to use the GizaAgent class to create a simple
agent that can interact with the Ethereum blockchain. In this tutorial we will see how to
use other chains with the GizaAgent class.
As we rely on ape to interact with the blockchain, we can use any chain that ape
supports. The list of supported chains via plugins can be found here.
In this tutorial we will use Arbitrum as an example. Arbitrum is a layer-2 solution
for Ethereum that provides low-cost and fast transactions. Arbitrum is supported
by ape, so we can use it with the GizaAgent class.
Installation
To follow this tutorial, you must first proceed with the following installation.
You should install Giza tools in a virtual environment. If you’re unfamiliar with
Python virtual environments, take a look at this guide. A virtual environment
makes it easier to manage different projects and avoid compatibility issues
between dependencies.
Now, your terminal session will use Python 3.11 for this project.
Install Giza
You'll find more options for installing Giza in the installation guide.
Install Dependencies
Create an API Key for your user in order to not regenerate your access token every few
hours.
We encourage the creation of a new account for each agent, as it will allow you to manage
the agent's permissions and access control more effectively, but importing accounts is also
possible.
Create an Agent
Now that we have a funded account, we can create an AI Agent. We can do this by running
the following command:
This command will prompt you to choose the account you want to use with the agent;
once you select the account, the agent will be created and you will receive the agent id.
The output will look like this:
[giza][2024-04-10 11:50:24.005] Creating agent ✅
[giza][2024-04-10 11:50:24.006] Using endpoint id to create agent, retriev
[giza][2024-04-10 11:50:53.480] Select an existing account to create the a
[giza][2024-04-10 11:50:53.480] Available accounts are:
┏━━━━━━━━━━━━━┓
┃ Accounts ┃
┡━━━━━━━━━━━━━┩
│ my_account │
└─────────────┘
Enter the account name: my_account
{
"id": 1,
"name": <agent_name>,
"description": <agent_description>,
"parameters": {
"model_id": <model_id>,
"version_id": <version_id>,
"endpoint_id": <endpoint_id>,
"alias": "my_account"
},
"created_date": "2024-04-10T09:51:04.226448",
"last_update": "2024-04-10T09:51:04.226448"
}
This will create an AI Agent that can be used to interact with the deployed MNIST model.
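Before Arbitrum networks show up in Ape, the ape-arbitrum plugin has to be installed (assuming the standard Ape plugin flow):
ape plugins install arbitrum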
We can confirm that the plugin has been installed by executing ape networks list in the terminal.
!ape networks list
arbitrum
├── goerli
│ ├── alchemy
│ └── geth (default)
├── local (default)
│ └── test (default)
├── mainnet
│ ├── alchemy
│ └── geth (default)
└── sepolia
├── alchemy
└── geth (default)
ethereum (default)
├── goerli
│ ├── alchemy
│ └── geth (default)
├── local (default)
│ ├── geth
│ └── test (default)
├── mainnet
│ ├── alchemy
│ └── geth (default)
└── sepolia
├── alchemy
└── geth (default)
Here we can see that we have multiple networks available, including arbitrum . So now
we can use it when instantiating the GizaAgent class.
For this execution, we will use Arbitrum mainnet with a private RPC node, because
public nodes have small quotas that can easily be reached.
The contract is a verified contract selected at random from Arbiscan; the goal is to
showcase that we can read properties from this contract, which means that we would also
be able to execute a write function.
In this case, as we are only executing a read function we don't need a funded wallet to do
anything as we won't sign any transactions.
Remember that we will need to specify the <Account>_PASSPHRASE if you are launching
your operation as a script; exporting it will be enough:
export <Account>_PASSPHRASE=your-passphrase
If you are using it from a notebook you will need to launch the notebook instance from
an environment with the passphrase variable or set it in the code prior to importing
giza_actions :
import os
os.environ["<Account>_PASSPHRASE"] = "your-passphrase"
import os
os.environ["<Account>_PASSPHRASE"] = ...
MODEL_ID = ...
VERSION_ID = ...
ACCOUNT = ...
PRIVATE_RPC = ... # This can also be loaded from the environment or a .env
agent = GizaAgent(
id=MODEL_ID,
version_id=VERSION_ID,
chain=f"arbitrum:mainnet:{PRIVATE_RPC}",
account=ACCOUNT,
contracts={
"random": "0x8606d62fD47473Fad2db53Ce7b2B820FdEab7AAF"
}
)
Now that we have the agent instance we can enter the execute() context and call the
read function from an Arbitrum smart contract:
with agent.execute() as contracts:
result = contracts.random.name()
print(f"Contract name is: {result}")
Now these same steps can be followed to use any other network supported by ape and
interact with different chains.
Uniswap V3 LP
Rebalancing with AI Agent
This tutorial shows how to use Giza Agents to automatically rebalance a Uniswap V3 LP
position with a verifiable volatility prediction Machine Learning model.
Introduction
Welcome to this step-by-step tutorial on leveraging Zero-Knowledge Machine Learning
(ZKML) for volatility prediction to manage a liquidity position on Uniswap V3. In this
guide, we will walk through the entire process required to set up, deploy, and maintain an
intelligent liquidity management solution using Giza stack. By the end of this tutorial,
you will have a functional system capable of optimizing your liquidity contributions based
on predictive market analysis.
The primary goal here is to use machine learning predictions to adjust your liquidity
position dynamically in response to anticipated market volatility, thereby maximizing
your potential returns and minimizing risks. This project combines concepts from
decentralized finance (DeFi), machine learning, and blockchain privacy enhancements
(specifically Zero-Knowledge Proofs) to create a liquidity providing strategy for Uniswap
V3, one of the most popular decentralized exchanges.
Note: This project constitutes a Proof of Concept and should not be deployed on a
mainnet, since it's likely to underperform. When developing a production-grade liquidity
management system, you should consider multiple factors, such as gas fees or
impermanent loss. For an introduction to the art of liquidity management on Uniswap
V3, you can refer to the following articles:
1. Uniswap V3 Overview
Uniswap V3 is the third version of the Uniswap decentralized exchange. It allows users to
swap between any pair of ERC-20 tokens in a permissionless way. Additionally, it allows
the liquidity providers (LPs) to choose the custom price ranges in order to maximize the
capital efficiency.
In a nutshell, LPs earn a fee on each swap performed within a given pool. When LPs want
to provide the liquidity, they choose a price range (formally, represented in the form of
ticks), within which their deployed capital can be used by other users to perform their
swaps. The tighter these bounds are chosen, the greater the return to the LP can be.
However, if the price deviates outside of those bounds, the LP no longer earns any fees.
Because of that, choosing the price range is of utmost importance for a liquidity provider.
One of the metrics that an LP can use to decide on the most optimal liquidity bounds is
the price volatility of a given pair. If the LP expects a high volatility to occur, they would
deploy their liquidity in a wider range to prevent the price from deviating outside of their
bounds and losing them money. Conversely, if the LP expects low volatility, they would
deploy a tighter-bound position in order to increase capital efficiency and, as a
result, generate more yield.
In this project we will use the volatility prediction to adjust the width of the LP liquidity
bounds.
2. Setting up Your Development Environment
Python 3.11 must be installed on your machine.
giza-sdk should be installed to use the Giza CLI and Giza Agents. You can install it by running pip install giza-sdk
You must have an active Giza account. If you don't have one, you can create one here.
Depending on the framework you want to use to develop a volatility prediction model, you might need to install some external libraries. In this example, we use torch, scikit-learn, and pandas. You can install them by running
pip install -U scikit-learn torch pandas
You will also need a funded EOA Ethereum address linked to an ape account. You can follow the creating an account and funding the account parts of our MNIST tutorial to complete these steps.
Once you have a funded development address, you need to get the tokens you want to provide liquidity for. In this example, we use the UNI-WETH pool with a 0.3% fee on Ethereum Sepolia. You can approve and mint WETH, and swap some of it for UNI, with the get_tokens.py script. Simply execute python get_tokens.py and follow the prompts in the console. The script will mint 0.0001 WETH and swap half of it for UNI.
Finally, we will need some environment variables. Create a .env file in the directory of this project and populate it with these two variables:
DEV_PASSPHRASE="<YOUR-APE-ACCOUNT-PASSWORD>"
SEPOLIA_RPC_URL="YOUR-RPC-URL"
We recommend using private RPCs, but if you don't have one and want to use a public one, you can use https://eth-sepolia.g.alchemy.com/v2/demo
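In the Python code, these values can then be read back from the .env file. The sketch below assumes python-dotenv; any other way of loading environment variables works just as well:
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads the .env file in the current directory

dev_passphrase = os.environ["DEV_PASSPHRASE"]
sepolia_rpc_url = os.environ["SEPOLIA_RPC_URL"]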
After logging in, we need to transpile our ONNX model representation into Cairo. To achieve this, simply execute:
giza transpile --model-id <YOUR-MODEL-ID> --framework CAIRO <PATH-TO-YOUR-ONNX-MODEL> --output-path <YOUR-OUTPUT-PATH>
Finally, we can deploy an endpoint that we will use to run our inferences. Execute it with:
giza endpoints deploy --model-id <YOUR-MODEL-ID> --version-id <YOUR-VERSION-ID>
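With the endpoint deployed, we create the AI Agent itself through the Giza CLI. An invocation along the following lines ties the agent to the endpoint; the exact flags are an assumption here, so check giza agents create --help for the current syntax:
giza agents create --endpoint-id <YOUR-ENDPOINT-ID> --name <AGENT-NAME> --description <AGENT-DESCRIPTION>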
This command will prompt you to choose the account you want to use with the agent. Once you select the account, the agent will be created and you will receive the agent id. The output will look like this:
[giza][2024-04-10 11:50:24.005] Creating agent ✅
[giza][2024-04-10 11:50:24.006] Using endpoint id to create agent, retriev
[giza][2024-04-10 11:50:53.480] Select an existing account to create the a
[giza][2024-04-10 11:50:53.480] Available accounts are:
┏━━━━━━━━━━━━━┓
┃ Accounts ┃
┡━━━━━━━━━━━━━┩
│ my_account │
└─────────────┘
Enter the account name: my_account
{
    "id": 1,
    "name": <agent_name>,
    "description": <agent_description>,
    "parameters": {
        "model_id": <model_id>,
        "version_id": <version_id>,
        "endpoint_id": <endpoint_id>,
        "alias": "my_account"
    },
    "created_date": "2024-04-10T09:51:04.226448",
    "last_update": "2024-04-10T09:51:04.226448"
}
This will create an AI Agent that can be used to interact with the Uniswap smart
contracts.
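In the Python code, the agent learns about these contracts through the contracts dictionary passed to GizaAgent. A sketch is shown below; the addresses are placeholders, so substitute the actual Sepolia deployments you use:
# Placeholder addresses; fill in the deployments relevant to your pool
contracts = {
    "nft_manager": "0x...",  # Uniswap V3 NonfungiblePositionManager
    "pool": "0x...",         # UNI-WETH 0.3% pool
    "tokenA": "0x...",       # UNI
    "tokenB": "0x...",       # WETH
}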
In order to remove the liquidity from an existing position, we first need the amount of liquidity locked inside it. Using that value, we can remove all of it from our NFT position and collect all the tokens along with the accrued fees:
def get_pos_liquidity(nft_manager, nft_id):
    # NonfungiblePositionManager.positions() returns the full position struct;
    # we only need the liquidity field.
    (
        nonce,
        operator,
        token0,
        token1,
        fee,
        tickLower,
        tickUpper,
        liquidity,
        feeGrowthInside0LastX128,
        feeGrowthInside1LastX128,
        tokensOwed0,
        tokensOwed1,
    ) = nft_manager.positions(nft_id)
    return liquidity
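As an illustration of how that liquidity value is then consumed, a simplified withdrawal helper could look like the following. The tuple layouts follow the Uniswap V3 NonfungiblePositionManager ABI, but close_position itself is a hypothetical sketch rather than the tutorial's exact code, and it is meant to be called inside agent.execute(), like the mint example further below:
import time

MAX_UINT128 = 2**128 - 1

def close_position(nft_manager, nft_id, recipient):
    # Withdraw all liquidity from the position, then sweep tokens and fees.
    liquidity = get_pos_liquidity(nft_manager, nft_id)
    deadline = int(time.time()) + 600
    # decreaseLiquidity((tokenId, liquidity, amount0Min, amount1Min, deadline))
    nft_manager.decreaseLiquidity((nft_id, liquidity, 0, 0, deadline))
    # collect((tokenId, recipient, amount0Max, amount1Max)) transfers the
    # withdrawn tokens plus any accrued fees to the recipient address
    nft_manager.collect((nft_id, recipient, MAX_UINT128, MAX_UINT128))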
Before minting a new position, we need to know the current pool price. On Uniswap V3, this value is represented as a tick. If you need a refresher on Uniswap V3 math, we recommend this article. Next, using the result of our volatility model, we calculate the lower and upper ticks of the new liquidity position. Finally, we prepare the parameters for the minting transaction. The source code for the helper functions used can be found here.
_, curr_tick, _, _, _, _, _ = contracts.pool.slot0()
tokenA_decimals = contracts.tokenA.decimals()
tokenB_decimals = contracts.tokenB.decimals()
lower_tick, upper_tick = get_tick_range(
    curr_tick, predicted_value, tokenA_decimals, tokenB_decimals, pool_fee
)
mint_params = get_mint_params(
    user_address, tokenA_amount, tokenB_amount, pool_fee, lower_tick, upper_tick
)
Finally, we can mint a new position by calling the mint function on the NFTManager smart contract:
contract_result = contracts.nft_manager.mint(mint_params)
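For reference, the mint parameters follow the MintParams struct of the NonfungiblePositionManager. A simplified version of what get_mint_params might assemble is shown below; the token-address arguments, the zero slippage bounds, and the exact return shape are assumptions of this sketch, not the tutorial's actual helper:
import time

def get_mint_params_sketch(
    user_address, token0, token1, amount0, amount1, pool_fee, lower_tick, upper_tick
):
    # Field order matches INonfungiblePositionManager.MintParams
    return (
        token0,
        token1,
        pool_fee,
        lower_tick,
        upper_tick,
        amount0,                  # amount0Desired
        amount1,                  # amount1Desired
        0,                        # amount0Min (no slippage protection in this sketch)
        0,                        # amount1Min
        user_address,             # recipient of the position NFT
        int(time.time()) + 600,   # deadline
    )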
All the code can be found in this script. Below is an example of how to define a prediction task, create an AI Agent, and mint an NFT representing an LP position:
import os

import numpy as np
from dotenv import load_dotenv  # assumes python-dotenv; any env-loading approach works

from giza.agents import GizaAgent

load_dotenv()
sepolia_rpc_url = os.environ.get("SEPOLIA_RPC_URL")


def predict(agent: GizaAgent, X: np.ndarray):
    """
    Run a verifiable volatility prediction.

    Args:
        X (np.ndarray): Input to the model.

    Returns:
        int: Predicted value.
    """
    # The job_size argument was truncated in the original snippet; "M" is an assumed value.
    prediction = agent.predict(input_feed={"val": X}, verifiable=True, job_size="M")
    return prediction


def create_agent(
    model_id: int, version_id: int, chain: str, contracts: dict, account: str
):
    """
    Create a Giza agent for the volatility prediction model
    """
    agent = GizaAgent(
        contracts=contracts,
        id=model_id,
        version_id=version_id,
        chain=chain,
        account=account,
    )
    return agent


def rebalance_lp(
    tokenA_amount,
    tokenB_amount,
    pred_model_id,
    pred_version_id,
    account="dev",
    chain=f"ethereum:sepolia:{sepolia_rpc_url}",
    nft_id=None,
):
    ...
    agent = create_agent(
        model_id=pred_model_id,
        version_id=pred_version_id,
        chain=chain,
        contracts=contracts,
        account=account,
    )
    result = predict(agent, X)
    with agent.execute() as contracts:
        ...
        contract_result = contracts.nft_manager.mint(mint_params)
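A typical entry point then simply calls rebalance_lp with your own model and version ids. The ids and token amounts below are placeholders; use the values printed by the Giza CLI and the amounts you actually hold:
if __name__ == "__main__":
    # Placeholder ids and amounts for illustration only
    rebalance_lp(
        tokenA_amount=10**14,
        tokenB_amount=10**14,
        pred_model_id=123,
        pred_version_id=1,
        account="dev",
        nft_id=None,
    )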
You should see the transaction receipt representing minting of a new LP position on
Uniswap V3. Congrats, you have just used an AI Agent to provide liquidity on a
decentralized exchange!
9. Conclusion
In this tutorial, we learned how to use the Giza SDK to create an AI Agent that
automatically rebalances our Uniswap V3 LP position. The next steps would include
iterating on the volatility prediction model and refining the rebalancing logic to improve
the agent's performance.