In this section, we provide a list of educational tutorials that will help you get started with Onchain AI Oracle (OAO).
Interaction with OAO - covers step-by-step creation of a simple Prompt contract that interacts with OAO.
Integration into OAO - covers the process of preparing your AI model and integrating it into OAO.
Bring Your Own Model into OAO
In this tutorial we explain how to integrate your own AI model into Onchain AI Oracle (OAO). We will start by exploring the mlgo repository to understand its structure. At the end, we will demonstrate how opML works by running a simple dispute game script inside a Docker container.
Understand how to transform an AI model and its inference code in order to integrate them into Onchain AI Oracle (OAO).
Execute a simple dispute game and understand the process of AI inference verification.
git installed
Clone the mlgo repository
Navigate to the cloned repository
To install the required dependencies for your project, run the following command:
If any dependencies are missing, make sure to install them in your Python environment.
First, we need to train a DNN model using PyTorch. The training part is shown in examples/mnist/trainning/mnist.ipynb.
After training, the model is saved at examples/mnist/models/mnist/mnist-small.state_dict.
ggml is a file format that consists of a version number, followed by three components that define a large language model: the model's hyperparameters, its vocabulary, and its weights. The ggml format allows for more efficient inference on CPU. We will now convert the Python model to ggml format by executing the following steps:
Navigate to the mnist folder
Convert the Python model into ggml format
To convert the AI model written in Python to ggml format, we execute a Python script and provide the file that stores the model as a parameter. The output is a binary file in ggml format. Note that the model is saved in big-endian byte order, making it easy to process in the big-endian MIPS-32 VM.
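To illustrate the byte-order point, here is a minimal Python sketch of serializing float32 weights in big-endian order, as the MIPS-32 VM expects (the function name is illustrative, not part of the mlgo conversion script):

```python
import struct

def pack_weights_be(weights):
    """Serialize float32 weights in big-endian byte order, matching
    what the big-endian MIPS-32 VM expects."""
    return b"".join(struct.pack(">f", w) for w in weights)

blob = pack_weights_be([0.5, -1.25])
# 0.5 is 0x3F000000 in IEEE-754 single precision, so the first four
# bytes start with 0x3F when written big-endian.
assert blob[:4] == b"\x3f\x00\x00\x00"
```

Writing the weights little-endian (`"<f"`) would force the VM to byte-swap every value at load time; storing them big-endian lets the MIPS VM read them directly.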
The next step is to write the inference code in Go. Then we will transform the Go binary into a MIPS VM executable file.
Go supports compilation to MIPS. However, the generated executable is in ELF format, and we need a pure sequence of MIPS instructions instead. To build an ML program for the MIPS VM, execute the following steps:
Navigate to the mnist_mips directory and build the Go inference code
The build script will compile the Go code and then run the compile.py script, which transforms the compiled Go code into a MIPS VM executable file.
Now that we have compiled our AI model and inference code into MIPS VM executable code, we can test the dispute game process. We will use a bash script from the opml repository to showcase the whole verification flow.
For this part of the tutorial we will use Docker, so make sure you have it installed.
Let's first check the content of the Dockerfile that we are using:
First we need to specify the operating system that runs inside our container. In this case we're using ubuntu:22.04.
Then we install all the dependencies necessary to run the dispute game script.
Then we configure SSH keys, so that the Docker container can clone all the required repositories.
Move to the root directory inside the Docker container and clone the opml repository along with its submodules.
Lastly, we tell Docker to build the executables and run the challenge script.
In order to successfully clone the opml repository, you need to generate a new SSH key and add it to your GitHub account. Once it's generated, save the private key as id_rsa in the local directory where the Dockerfile is placed. Then add the public key to your GitHub account.
Build the docker image
Run the local Ethereum node
In another terminal run the challenge script
After executing the steps above you should be able to see the interactive challenge process in the console.
The script first deploys the necessary contracts to the local node. The proposer opML node executes AI inference, and challenger nodes can dispute it if they think the result is not valid. The challenger and proposer interact in order to find the step at which their computations differ. Once the disputed step is found, it is sent to the smart contract for arbitration. If the challenge is successful, the proposer node gets slashed.
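The proposer/challenger interaction is essentially a binary search over the computation trace. The following Python sketch shows the idea with hypothetical state hashes; it is a simplification, not the actual opml protocol messages:

```python
def find_divergent_step(proposer_trace, challenger_trace):
    """Bisect to the first step where two execution traces disagree.
    Only this single step needs to be re-executed onchain for arbitration.
    Assumes equal-length traces that differ in at least one step."""
    lo, hi = 0, len(proposer_trace) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if proposer_trace[mid] == challenger_trace[mid]:
            lo = mid + 1   # traces agree up to mid: divergence is later
        else:
            hi = mid       # traces disagree at mid: divergence is here or earlier
    return lo

# Hypothetical state hashes: the two nodes agree for the first 5 steps.
proposer   = ["s0", "s1", "s2", "s3", "s4", "x5", "x6"]
challenger = ["s0", "s1", "s2", "s3", "s4", "y5", "y6"]
assert find_divergent_step(proposer, challenger) == 5
```

Because the search is logarithmic, only a handful of onchain interactions are needed even for traces with millions of VM steps.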
In this tutorial we achieved the following:
converted our AI model from Python to ggml format
compiled the AI inference code written in Go to MIPS VM executable format
ran the dispute game inside a Docker container and learned how the opML verification process works
In order to use your AI model onchain, you need to run your own opML nodes; then the model can be integrated into OAO. Try to reproduce this tutorial with your own model.
This tutorial will help you understand the structure of Onchain AI Oracle (OAO) and guide you through building a simple Prompt contract that interacts with OAO. We will implement the contract step by step. At the end, we will deploy the contract to a blockchain network and interact with it.
If you prefer a video version of the tutorial, check it here.
Final version of the code can be found here.
Setup development environment
Understand the project setup and template repository structure
Learn how to interact with OAO and build an AI-powered smart contract
To follow this tutorial you need to have Foundry and git installed.
Clone the template repository and install submodules
Move into the cloned repository
Copy .env.example and rename it to .env. We will need these environment variables later for deployment and testing. You can leave them empty for now.
Install foundry dependencies
At the beginning we need to import several dependencies which our smart contract will use.
IAIOracle - the interface of the AI Oracle contract; it defines the requestCallback method that our Prompt contract will call
AIOracleCallbackReceiver - an abstract contract that contains an instance of AIOracle and implements a callback method that needs to be overridden in the Prompt contract
We'll start by implementing the constructor, which accepts the address of the deployed AIOracle contract.
Now let’s define a method that will interact with the OAO. This method takes 2 parameters: the id of the model and the input prompt data. It also needs to be payable, because the user must pass the fee for the callback execution.
In the code above we do the following:
Convert input to bytes
Call the requestCallback function with the following parameters:
modelId: ID of the AI model in use.
input: User-provided prompt.
callbackAddress: The address of the contract that will receive OAO's callback.
callbackGasLimit[modelId]: Maximum amount of gas that can be spent on the callback; we will define this mapping shortly.
callbackData: Callback data that is used in the callback.
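Putting the steps above together, the constructor and request method might look like the following sketch. This is based on ORA's Prompt template; the exact requestCallback signature and names such as calculateAIResult are assumptions inferred from the surrounding text, so check the final repository version:

```solidity
constructor(IAIOracle _aiOracle) AIOracleCallbackReceiver(_aiOracle) {}

function calculateAIResult(uint256 modelId, string calldata prompt) external payable {
    // 1. Convert the input prompt to bytes.
    bytes memory input = bytes(prompt);
    // 2. Ask OAO to run inference and call back into this contract,
    //    forwarding the callback fee (msg.value).
    aiOracle.requestCallback{value: msg.value}(
        modelId,
        input,
        address(this),
        callbackGasLimit[modelId],
        "" // callbackData: left empty in this simple example
    );
}
```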
The next step is to define the mapping that keeps track of the callback gas limit for each model, and to set the initial values inside the constructor. We'll also define a modifier so that only the contract owner can change these values.
We want to store all the requests that happened, so we create a data structure for the request data and the mapping between requestId and the request data.
In the code snippet above we added prompt, sender and the modelId to the request and also emitted an event.
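A sketch of the storage described above (the struct, mapping, and event names are illustrative; the final template may use different identifiers):

```solidity
struct AIOracleRequest {
    address sender;
    uint256 modelId;
    bytes input;   // the prompt
    bytes output;  // filled in later by the OAO callback
}

// requestId => request data
mapping(uint256 => AIOracleRequest) public requests;

// modelId => gas limit reserved for the OAO callback
mapping(uint256 => uint64) public callbackGasLimit;

address public owner;

modifier onlyOwner() {
    require(msg.sender == owner, "Only owner");
    _;
}

// Only the owner can tune how much gas a model's callback may use.
function setCallbackGasLimit(uint256 modelId, uint64 gasLimit) external onlyOwner {
    callbackGasLimit[modelId] = gasLimit;
}

event promptRequest(uint256 requestId, address sender, uint256 modelId, string prompt);
```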
Now that we implemented a method for interaction with the OAO, let's define a callback that will be invoked by the OAO after the computation of the result.
We've overridden the callback function from AIOracleCallbackReceiver.sol. It's important to use the modifier so that only OAO can call back into our contract.
Function flow:
First we check whether a request with the provided id exists. If it does, we add the output value to the request.
Then we define the prompts mapping that stores all the prompts and outputs for each model that we use.
At the end we emit an event that the prompt has been updated.
Notice that this function takes callbackData as the last parameter. This parameter can be used to execute arbitrary logic during the callback; it is passed in the requestCallback call. In our simple example, we leave it empty.
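Following the flow above, the callback might look like this sketch (assuming the aiOracleCallback signature and onlyAIOracleCallback modifier from ORA's AIOracleCallbackReceiver; event and mapping names are illustrative):

```solidity
// modelId => prompt => output
mapping(uint256 => mapping(string => string)) public prompts;

event promptsUpdated(uint256 requestId, string output);

function aiOracleCallback(
    uint256 requestId,
    bytes calldata output,
    bytes calldata callbackData
) external override onlyAIOracleCallback() {
    // 1. The request must have been created by this contract.
    AIOracleRequest storage request = requests[requestId];
    require(request.sender != address(0), "request does not exist");
    // 2. Attach the result to the stored request.
    request.output = output;
    // 3. Store the result so it can be read via prompts(modelId, prompt).
    prompts[request.modelId][string(request.input)] = string(output);
    // 4. Signal that the prompt has been answered.
    emit promptsUpdated(requestId, string(output));
}
```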
Finally let's add the method that will estimate the fee for the callback call.
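Assuming OAO exposes an estimateFee view (as in ORA's IAIOracle interface), the wrapper can simply delegate to it:

```solidity
function estimateFee(uint256 modelId) public view returns (uint256) {
    // OAO prices the request based on the model and the amount of gas
    // we reserve for the callback.
    return aiOracle.estimateFee(modelId, callbackGasLimit[modelId]);
}
```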
With this, the source code for our contract is complete. The final version should look like this:
Add your PRIVATE_KEY, RPC_URL and ETHERSCAN_KEY to the .env file. Then source the variables in the terminal.
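For example (the values below are placeholders, not real credentials):

```shell
# Example .env with placeholder values — replace with your own secrets.
cat > .env <<'EOF'
PRIVATE_KEY=0x0000000000000000000000000000000000000000000000000000000000000001
RPC_URL=https://rpc.example.org
ETHERSCAN_KEY=YOUR_ETHERSCAN_KEY
EOF
# Load the variables into the current shell session
# (". ./.env" is the portable spelling of "source .env").
. ./.env
echo "$RPC_URL"
```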
Create a deployment script
Go to the Reference page and find the OAO_PROXY address for the network you want to deploy to.
Then open script/Prompt.s.sol and add the following code:
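A sketch of such a deployment script, assuming a standard Foundry layout (the import paths and the zero-address placeholder are assumptions; substitute the OAO_PROXY address you looked up):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Script} from "forge-std/Script.sol";
import {Prompt} from "../src/Prompt.sol";
import {IAIOracle} from "../src/interfaces/IAIOracle.sol"; // path is an assumption

contract PromptScript is Script {
    function run() external {
        // Placeholder: replace with the OAO_PROXY address for your network.
        address OAO_PROXY = address(0);
        // Sign the deployment with the key from .env.
        vm.startBroadcast(vm.envUint("PRIVATE_KEY"));
        new Prompt(IAIOracle(OAO_PROXY));
        vm.stopBroadcast();
    }
}
```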
Run the deployment script
Once the contract is deployed and verified, you can interact with it. Go to the blockchain explorer for your chosen network (e.g. Etherscan), and paste in the address of the deployed Prompt contract.
Let's use Stable Diffusion model (id = 50).
First call the estimateFee method to calculate the fee for the callback.
Then request AI inference from OAO by calling the calculateAIResult method. Pass the model id and the prompt for image generation. Remember to provide the estimated fee as the value for the transaction.
After the transaction is executed and OAO calculates the result, you can check it by calling the prompts method. Simply input the model id and the prompt you used for image generation. In the case of Stable Diffusion, the output will be a CID (content identifier on IPFS). To check the image, go to https://ipfs.io/ipfs/[Replace_your_CID].
Install a browser wallet if you haven't already (e.g. MetaMask)
Open your solidity development environment. We'll use Remix IDE.
Copy the contract along with necessary dependencies to Remix.
Choose the Solidity compiler version and compile the contract to bytecode
Deploy the compiled bytecode. Once we have compiled our contract, we can deploy it.
First go to the wallet and choose the blockchain network for the deployment.
To deploy the Prompt contract, we need to provide the address of the already deployed AIOracle contract. You can find this address on the reference page; we are looking for the OAO_PROXY address.
Deploy the contract by signing the transaction in the wallet.
Once the contract is deployed, you can interact with it. Remix provides an interface for interaction, but you can also use blockchain explorers like Etherscan. The steps are the same as described earlier, using the Stable Diffusion model (id = 50).
In this tutorial we covered, step by step, how to write a Solidity smart contract that interacts with ORA's Onchain AI Oracle. We then compiled the contract, deployed it to a live network, and interacted with it. In the next tutorial, we will extend the functionality of the Prompt contract to support AI-generated NFT collections.