The Optimistic Machine Learning (OPML) framework connects off-chain machine learning computations with smart contracts. Results are provably correct and can be disputed on-chain to ensure trust and accountability, enabling efficient AI-driven decision-making in decentralized applications. OPML is a core component of the AI Oracle, playing a critical role in validating inference results.
Initiate Request: The user contract sends an inference request to AI Oracle by calling the `requestCallback` function.
opML Request: AI Oracle creates an opML request based on the user contract request.
Event Emission: AI Oracle emits a `requestCallback` event, which is collected by the opML node.
ML Inference: The opML node performs the AI model computation.
Result Submission: The opML node uploads the result on-chain.
Callback Execution: The result is dispatched to the user's smart contract via a callback function.
Challenge Window: Begins after the result is submitted on-chain (step 5 above).
Verification: opML validators or any network participant can check the results and challenge the output if it is incorrect.
Result Update: If a challenge is successful, the incorrect result is updated on-chain.
Finality: After the challenge period, the result is finalized onchain and made immutable.
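The request/response flow above can be sketched as a minimal consumer contract. This is an illustrative sketch only: the interface and the gas limit shown here are assumptions simplified from the description above; see the official OAO contracts for the real definitions.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical minimal interface; the real IAIOracle may differ.
interface IAIOracle {
    function requestCallback(
        uint256 modelId,
        bytes calldata input,
        address callbackAddress,
        uint64 gasLimit,
        bytes calldata callbackData
    ) external payable returns (uint256 requestId);
}

contract MinimalConsumer {
    IAIOracle public immutable aiOracle;
    mapping(uint256 => bytes) public results;

    constructor(IAIOracle _aiOracle) {
        aiOracle = _aiOracle;
    }

    // Step 1: send an inference request (steps 2-4 happen off-chain).
    function ask(uint256 modelId, bytes calldata prompt) external payable {
        aiOracle.requestCallback{value: msg.value}(
            modelId, prompt, address(this), 500_000, ""
        );
    }

    // Steps 5-6: the opML node submits the result and AI Oracle invokes this callback.
    function aiOracleCallback(uint256 requestId, bytes calldata output, bytes calldata) external {
        require(msg.sender == address(aiOracle), "only AI Oracle");
        // The result may still be disputed during the challenge window (steps 7-8).
        results[requestId] = output;
    }
}
```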
An AI Oracle powered by Optimistic Machine Learning (opML)
ORA's Onchain AI Oracle is a verifiable oracle protocol that allows developers to create smart contracts and applications that ingest verifiable machine learning (ML) inferences for use on the blockchain.
AI Oracle is powered by Optimistic Machine Learning (opML). It enables a verifiable, transparent, and decentralized way to integrate advanced ML models like LLaMA 3 and Stable Diffusion into smart contracts.
It offers:
Scalability: Can run any ML model onchain without prohibitive overhead.
Efficiency: Reduces computational time and cost when compared to zkML.
Practicality: Can easily be integrated into applications by using existing blockchain infrastructure.
Verifiability: Leverages fraud proofs to provide applications and users with assured computational integrity.
AI Oracle comprises a set of smart contracts and off-chain components:
opML Contract: Handles fraud proofs and challenges to ensure on-chain verifiability of ML computations.
AIOracle Contract: Connects the opML node with on-chain callers to process ML requests and integrates various ML models.
User Contract: Customizable contracts that initiate AI inference requests and receive results directly from AI Oracle.
ORA Nodes: Machines that run the Tora Client and interact with the ORA network. Nodes currently perform two functions: submitting and validating inference results.
To showcase how AI Oracle can enhance consumer-facing products, we introduce Fortune Teller.
Fortune Teller leverages the NestedInference smart contract, as discussed in the . This application aims to onboard new users to Web3 and demonstrate the capabilities of .
User Interaction: The application prompts users to answer specific questions posed by the Magic Wizard.
Fortune Telling: The Wizard casts its spells by requesting inferences from AI Oracle.
Response Generation: The Llama3 model generates a textual response, which is then used as a prompt for image creation via StableDiffusion.
NFT Minting: Users can mint AI-generated NFTs if they are satisfied with their fortune image.
The image on the right represents a user's fortune result generated by AI Oracle:
Objective AI-Generated Insights: AI provides neutral and unbiased outputs, ensuring a fair experience across diverse use cases.
Immutable Onchain Data: Information generated is securely stored on the blockchain, making it tamper-proof and easily verifiable.
Transparent Data Generation: Utilizing opML, the entire generation process is transparent, fostering trust in the system across different applications.
When interacting with the AI Oracle, it is crucial to allocate sufficient gas for the execution of the callback function. The gas consumed during the callback depends entirely on the implementation of the `aiOracleCallback` method. Its logic directly impacts gas usage: the more complex the logic, the higher the gas consumption. Carefully design your `aiOracleCallback` to balance functionality and gas efficiency.
To estimate the gas required for the `aiOracleCallback` in a Foundry project:
Run `forge test --gas-report` to generate a gas report.
Locate your Prompt contract in the report and check the gas usage for the `aiOracleCallback` method.
Use the calculated gas amount to set the gas limit during the initialisation of the Prompt contract (in the constructor).
Alternatively you can use other tools like Hardhat or Tenderly.
We provided a gas limit estimation test in the template repository.
To execute the estimation, run: `forge test --match-path test/EstimateGasLimit.t.sol -vvvv`. Next to the callback function, you can observe the gas amount needed for execution, eg. `[1657555] Prompt::aiOracleCallback`; in this case we can set the gas limit to 1700000.
The above script estimates the gas necessary for the Stable-Diffusion callback. Note that the result will always be of constant size (an IPFS CID).
To estimate the gas limit for Llama3, you need to change `modelId` and `result`.
💡 The maximum length of a Llama3 response is a 200-word string, hence we will use this string size to estimate the gas limit.
Bring Your Own Model
In this tutorial we explain how to integrate your own AI model with ORA's AI Oracle. We will start by looking at repository and trying to understand what's happening there. At the end we will showcase how works, by running a simple dispute game script inside a docker container.
Understand how to transform an AI model inference code and integrate it with ORA's AI Oracle
Execute a simple dispute game and understand the process of AI inference verification.
installed
Clone repository
Navigate to cloned repository
To install the required dependencies for your project, run the following command:
If there are some missing dependencies, make sure to install them in your Python environment.
First we need to train a DNN model using PyTorch. The training part is shown in examples/mnist/trainning/mnist.ipynb.
After training, the model is saved at examples/mnist/models/mnist/mnist-small.state_dict.
Navigate to the mnist folder
Convert the Python model into ggml format
To convert the AI model written in Python to ggml format, we execute a Python script, providing the file that stores the model as a parameter to the script. The output is a binary file in ggml format. Note that the model is saved in big-endian, making it easy to process in the big-endian MIPS-32 VM.
The next step is to write the inference code in Go. Then we will transform the Go binary into a MIPS VM executable file.
Go supports compilation to MIPS. However, the generated executable is in ELF format. We'd like to get a pure sequence of MIPS instructions instead. To build an ML program for the MIPS VM, execute the following steps:
Navigate to the mnist_mips directory and build the Go inference code
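The exact build commands live in the repository's build scripts; as a rough sketch, Go's standard cross-compilation flags for big-endian MIPS look like this (the output name is illustrative):

```shell
cd mnist_mips
# GOOS/GOARCH select the target platform; GOMIPS=softfloat avoids hardware-float instructions.
GOOS=linux GOARCH=mips GOMIPS=softfloat go build -o mnist_mips_elf .
# A repository script then converts the ELF binary into a raw MIPS instruction sequence.
```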
We have now compiled our AI model and inference code into MIPS VM executable code.
First we need to specify the operating system that runs inside our container. In this case we're using ubuntu:22.04.
Then we configure SSH keys, so that the Docker container can clone all the required repositories.
Navigate to the root directory inside the Docker container and clone the opml repository along with its submodules.
Lastly, we tell Docker to build the executables and run the challenge script.
In order to successfully clone the opml repository, you need to generate a new SSH key. Once it's generated, save the key as id_rsa in the local repository where the Dockerfile is placed. Then add the public key to your GitHub account.
Build the docker image
Run the local Ethereum node
In another terminal run the challenge script
After executing the steps above you should be able to see the interactive challenge process in the console.
The script first deploys the necessary contracts to the local node. The proposer opML node executes the AI inference, and challenger nodes can dispute it if they think the result is invalid. The challenger and proposer interact to find the first differing step between their computations. Once the disputed step is found, it is sent to the smart contract for arbitration. If the challenge is successful, the proposer node gets slashed.
In this tutorial we achieved the following:
converted our AI model from Python to ggml format
compiled AI inference code written in go to MIPS VM executable format
ran the dispute game inside a docker container and understood the opML verification process
Build your dApp using ORA's AI Oracle
This tutorial will help you understand the structure of the AI Oracle and guide you through the process of building a simple Prompt contract that interacts with the ORA network.
If you prefer a video version of the tutorial, check it .
Final version of the code can be found .
Setup the development environment
Understand the project setup and template repository structure
Learn how to interact with the AI Oracle and build an AI powered smart contract
To follow this tutorial you need to have and installed.
Clone template repository and install submodules
Move into the cloned repository
Copy .env.example and rename it to .env. We will need these environment variables later for deployment and testing. You can leave them empty for now.
Install foundry dependencies
At the beginning we need to import several dependencies which our smart contract will use.
IAIOracle - an interface that defines the `requestCallback` method the Prompt contract calls to request inference
AIOracleCallbackReceiver - an abstract contract that contains an instance of AIOracle and implements a callback method that needs to be overridden in the Prompt contract
We'll start by implementing the constructor, which accepts the address of the deployed AIOracle contract.
Now let’s define a method that will interact with the AI Oracle. This method requires two parameters: the ID of the model and the input prompt data. It also needs to be payable, because the user needs to pass the fee for the callback execution.
In the code above we do the following:
Convert input to bytes
Call the requestCallback function with the following parameters:
modelId: ID of the AI model in use.
input: User-provided prompt.
callbackAddress: The address of the contract that will receive AI Oracle's callback.
callbackGasLimit[modelId]: The maximum amount of gas that can be spent on the callback (defined later).
callbackData: Callback data that is used in the callback.
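Putting these parameters together, the request method might look like this (a sketch following the tutorial's naming; the `requestCallback` signature is taken from the interface description above):

```solidity
function calculateAIResult(uint256 modelId, string calldata prompt) external payable {
    // Convert the user-provided prompt to bytes.
    bytes memory input = bytes(prompt);
    aiOracle.requestCallback{value: msg.value}(
        modelId,
        input,
        address(this),              // callbackAddress: this contract receives the result
        callbackGasLimit[modelId],  // gas reserved for the callback execution
        ""                          // callbackData: unused in this simple example
    );
}
```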
The next step is to define a mapping that keeps track of the callback gas limit for each model and set the initial values inside the constructor. We'll also define a modifier so that only the contract owner can change these values.
We want to store all the requests that occurred. For that, we create a struct for the request data and a mapping from `requestId` to the request data.
In the code snippet above we added prompt, sender and the modelId to the request and also emitted an event.
Now that we implemented a method for interaction with the AI Oracle, let's define a callback that will be invoked by the AI Oracle after the computation of the result.
We've overridden the callback function from AIOracleCallbackReceiver.sol. It is important to use the modifier, so that only the AI Oracle can call back into our contract.
Function flow:
First we check if the request with the provided ID exists. If it does, we add the output value to the request.
Then we define the `prompts` mapping that stores all the prompts and outputs for each model that we use.
At the end we emit an event that the prompt has been updated.
Notice that this function takes callbackData as the last parameter. This parameter can be used to execute arbitrary logic during the callback. It is passed during the `requestCallback` call. In our simple example, we leave it empty.
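The callback flow described above can be sketched as follows (the struct fields, mapping layout, and event are assumptions consistent with the tutorial's description; the modifier name may differ in the actual AIOracleCallbackReceiver):

```solidity
function aiOracleCallback(
    uint256 requestId,
    bytes calldata output,
    bytes calldata callbackData
) external override onlyAIOracleCallback() {
    // 1. Check that the request with the provided ID exists.
    AIOracleRequest storage request = requests[requestId];
    require(request.sender != address(0), "request does not exist");
    // 2. Add the output value to the request.
    request.output = output;
    // 3. Store the output in the prompts mapping: modelId => prompt => output.
    prompts[request.modelId][string(request.input)] = string(output);
    // 4. Emit an event that the prompt has been updated.
    emit PromptsUpdated(requestId, request.modelId, string(request.input), string(output), callbackData);
}
```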
Finally let's add the method that will estimate the fee for the callback call.
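A sketch of such a method, assuming AIOracle exposes an `estimateFee(modelId, gasLimit)` view function (verify the actual signature in AIOracle.sol):

```solidity
function estimateFee(uint256 modelId) public view returns (uint256) {
    // Delegate to the oracle, using the callback gas limit configured for this model.
    return aiOracle.estimateFee(modelId, callbackGasLimit[modelId]);
}
```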
With this, we've completed the source code for our contract. The final version should look like this:
Add your `PRIVATE_KEY`, `RPC_URL` and `ETHERSCAN_KEY` to the .env file. Then source the variables in the terminal.
Create a deployment script
Then open script/Prompt.s.sol and add the following code:
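The snippet itself was lost from this page; a typical Foundry deployment script for the Prompt contract looks roughly like this (the import paths and the `OAO_PROXY` environment variable are assumptions):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Script} from "forge-std/Script.sol";
import {Prompt} from "../src/Prompt.sol";
import {IAIOracle} from "../src/interfaces/IAIOracle.sol"; // path is an assumption

contract DeployPrompt is Script {
    function run() external {
        vm.startBroadcast();
        // OAO_PROXY: address of the deployed AIOracle proxy for your network.
        new Prompt(IAIOracle(vm.envAddress("OAO_PROXY")));
        vm.stopBroadcast();
    }
}
```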
Run the deployment script
Let's use Stable Diffusion model (id = 50) as an example.
First call the `estimateFee` method to calculate the fee for the callback.
Request the AI inference from the AI Oracle by calling the `calculateAIResult` method. Pass the model ID and the prompt for the image generation. Remember to provide the estimated fee as the value for the transaction.
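From the command line, the two calls can be made with Foundry's `cast` (the addresses and prompt are placeholders; the function signatures follow this tutorial's contract):

```shell
# Read call: estimate the callback fee for model 50 (Stable Diffusion).
cast call $PROMPT_ADDRESS "estimateFee(uint256)(uint256)" 50 --rpc-url $RPC_URL

# Write call: request the inference, attaching the estimated fee as value.
cast send $PROMPT_ADDRESS "calculateAIResult(uint256,string)" 50 "a castle on a hill" \
  --value <estimated_fee_wei> --rpc-url $RPC_URL --private-key $PRIVATE_KEY
```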
Install the browser wallet if you haven't already (eg. Metamask)
Copy the contract along with necessary dependencies to Remix.
Choose the Solidity compiler version and compile the contract to bytecode.
Deploy the compiled bytecode. Once we've compiled our contract, we can deploy it.
First go to the wallet and choose the blockchain network for the deployment.
Deploy the contract by signing the transaction in the wallet.
In this tutorial we covered, step by step, writing a Solidity smart contract that interacts with ORA's AI Oracle. We then compiled and deployed our contract to a live network and interacted with it. In the next tutorial, we will extend the functionality of the Prompt contract to support AI-generated NFT collections.
Text: Generated by Llama3 ()
Image: Generated by Stable Diffusion v3 ()
💡 If you're interested in creating your own frame, check out our repository. This template can help you bootstrap your application, but you'll need to modify it based on your specific use case.
We encourage you to develop your own applications. Explore ideas in the repository, whether it’s a Farcaster frame, a Telegram mini app, or any other client application that interacts with AI Oracle.
ggml is a file format that consists of a version number, followed by three components that define a large language model: the model's hyperparameters, its vocabulary, and its weights. Ggml allows for more efficient inference runs on CPU. We will now convert the Python model to ggml format by executing the following steps:
The build script will compile the Go code and then run a script that transforms the compiled Go code into a MIPS VM executable file.
We can test the dispute game process. We will use a script from the opml repository to showcase the whole verification flow.
For this part of the tutorial we will use Docker, so make sure to have it installed.
Let's first check the content of the Dockerfile that we are using:
Then we need to install all the necessary dependencies in order to run .
💡 Gas Limit Estimation - It's important to define `callbackGasLimit` for each model to ensure that AIOracle will have enough funds to return the response! Read more on page.
Go to page and find the OAO_PROXY address for the network you want to deploy to.
Once the contract is deployed and verified, you can interact with it. Go to blockchain explorer for your chosen network (eg. ), and paste the address of the deployed Prompt contract.
After the transaction is executed, and the AI Oracle calculates the result, you can check it by calling the `prompts` method. Simply input the model ID and the prompt you used for image generation. In the case of Stable Diffusion, the output will be a CID (content identifier on ). To check the image go to .
Open your solidity development environment. We'll use .
To deploy the Prompt contract we need to provide the address of already deployed AIOracle contract. You can find this address on the . We are looking for OAO_PROXY address.
Once the contract is deployed, you can interact with it. Remix provides an interface for interaction, but you can also use blockchain explorers like . Let's use the Stable Diffusion model (id = 50).
First call the `estimateFee` method to calculate the fee for the callback.
Then request AI inference from the AI Oracle by calling `calculateAIResult` method. Pass the model id and the prompt for the image generation. Remember to provide estimated fee as a value for the transaction.
After the transaction is executed, and the AI Oracle calculates the result, you can check it by calling the `prompts` method. Simply input the model ID and the prompt you used for image generation. In the case of Stable Diffusion, the output will be a CID (content identifier on ).
Source code of AI Oracle:
For supported models and deployment addresses, see page.
For example integrations and ideas to build, see .
Check out AI oracle.
Supported AI Models and Deployment Addresses of ORA's AI Oracle.
Model ID
11
Deployed Network
Ethereum & Ethereum Sepolia, Optimism & Optimism Sepolia, Arbitrum & Arbitrum Sepolia, Manta & Manta Sepolia, Linea, Base, Mantle, Polygon PoS
Fee
Mainnet: 0.0003 ETH / 3 MATIC / 3 MNT Testnet: 0.01 ETH
Usage
Model ID
13
Deployed Network
Ethereum & Ethereum Sepolia
Fee
Mainnet: 0.0003 ETH Testnet: 0.01 ETH
Usage
Model ID
14
Deployed Network
Arbitrum & Arbitrum Sepolia, Ethereum Sepolia
Fee
Mainnet: 0.0003 ETH Testnet: 0.01 ETH
Usage
Note
Inputs exceeding 7,000 characters won't receive a callback.
Model ID
15
Deployed Network
Arbitrum & Arbitrum Sepolia, Ethereum Sepolia
Fee
Mainnet: 0.0003 ETH Testnet: 0.01 ETH
Usage
Model ID
50
Deployed Network
Ethereum & Ethereum Sepolia, Optimism & Optimism Sepolia, Arbitrum & Arbitrum Sepolia, Manta & Manta Sepolia, Linea, Base, Mantle, Polygon PoS
Fee
Mainnet: 0.0003 ETH / 3 MATIC / 3 MNT Testnet: 0.01 ETH
Usage
Model ID
503
Deployed Network
Ethereum & Ethereum Sepolia, Optimism & Optimism Sepolia, Arbitrum & Arbitrum Sepolia, Manta & Manta Sepolia, Linea, Base, Mantle, Polygon PoS
Fee
Mainnet: 0.0003 ETH / 3 MATIC / 3 MNT Testnet: 0.01 ETH
Usage
ORA RMS API Models are supported on Ethereum mainnet and Base networks.
To determine the model ID of a specific model, please refer to the code below:
Prompt and SimplePrompt are both example smart contracts that interact with the AI Oracle.
For simpler application scenarios (eg. Prompt Engineering based AI like GPTs), you can directly use Prompt or SimplePrompt.
SimplePrompt saves gas by only emitting the event without storing historical data.
Deprecated contracts: AIOracle, Prompt.
The AI Settlement Oracle is the first AI-powered truth machine, leveraging verifiable AI to resolve and settle factual questions onchain. It offers a trustless, autonomous system for information resolution, eliminating human error and manipulation.
Accuracy & Fairness: Immune to economic manipulation and herd behaviour typical of traditional systems.
Trustless Settlement: Ensures unbiased outcomes for any factual query.
The AI Settlement Oracle contains two parts of interaction:
Parse Context: The initial interaction processes a question by analyzing web sources and summarizing them into a coherent "truth context." This provides a reliable foundation for further reasoning.
Onchain Settlement: In the second interaction, the truth context and helper prompts are integrated into an input for the AI Oracle. The Oracle uses this input to generate a verified outcome, enabling decentralized and transparent truth settlement onchain.
Easy to use guides for non-developers
As a user, to interact with AI Oracle, you can:
Use the AI.ORA.IO frontend - view our video tutorial
Directly via the Prompt contract on Etherscan
In Prompt contract's Read Contract section, call `estimateFee` with the specified `modelId`.
In Prompt contract's Write Contract section, call `calculateAIResult` with the fee (converted from wei to ether), `prompt`, and `modelId`.
In the AI Oracle contract, look for the new transaction that fulfills the request you sent.
In Prompt contract's Read Contract section, call `getAIResult` with the previous `modelId` and `prompt` to view the AI inference result.
If you want to retrieve your historical AI inference (eg. AIGC image), you can find them on blockchain explorer:
Find your transaction for sending AI request, and enter the "Logs" tab. Example tx: https://etherscan.io/tx/0xfbfdb2efcee23197c5ea8487368a905385c84afdc465cab43bc1ad01da773404#eventlog
Access your `requestId` in the "Logs" tab. Example's case: "1928".
In OAO's smart contract, look for the Invoke Callback transaction with the same `requestId`.
Normally, this transaction will be around the same time as the one in step 1. To filter transactions by date, click "Advanced Filter" and then the button near "Age".
Find the transaction for AI inference result, and enter the "Logs" tab. Example tx: https://etherscan.io/tx/0xfbfdb2efcee23197c5ea8487368a905385c84afdc465cab43bc1ad01da773404#eventlog
Access the output data. Example's case: "QmecBGR7dD7pRtY48FEKoeLVsmBTLwvdicWRkX9xz2NVvC", which is the IPFS hash that can be accessed with an IPFS gateway.
In this tutorial, we’ll explore advanced techniques for interacting with the AI Oracle. Specifically, we’ll dive into topics like , , and Data Availability (DA) Options, giving you a deeper understanding of these features and how to leverage them effectively.
A user can perform nested inference by initiating a second inference based on the result of the first inference within a smart contract. This action can be completed atomically and is not restricted to a two-step function.
Some of the use cases for a nested inference call include:
generating a prompt with an LLM for an AIGC (AI Generated Content) NFT
extracting data from a data set, then generating visual data with different models
adding a transcript to a video, then translating it to different languages with different models
For demo purposes we built a that uses ORA's AI Oracle.
The idea of the Nested Inference contract is to execute multiple inference requests in one transaction. We'll modify the contract to support nested inference requests. In our example, it will call the Llama3 model first, then use the inference result as the prompt for another request to the StableDiffusion model.
The main goal of this tutorial is to understand what changes we need to make to Prompt contract in order to implement logic for various use cases.
modify the `calculateAIResult` method to support multiple requests
modify `aiOracleCallback` with the logic to handle the second inference request
💡 When estimating the gas cost for the callback, we should take both models into consideration.
We now have an additional function parameter for the second model ID. Note that we encode and forward `model2Id` as callback data in the `aiOracle.requestCallback` call.
The main change here is within the "if" block. If the callback data (`model2Id`) is returned, we want to execute a second inference request to the AI Oracle.
The output from the first inference call will be passed to the second one. This allows for interesting use cases where you can combine text-to-text (eg. Llama3) and text-to-image (eg. Stable-Diffusion) models.
If the nested inference call is not successful, the whole function will revert.
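A sketch of the modified callback (the request bookkeeping is simplified; it assumes the contract retained part of the user's cumulative fee to fund the second request, and that the modifier and struct match the earlier tutorial's description):

```solidity
function aiOracleCallback(
    uint256 requestId,
    bytes calldata output,
    bytes calldata callbackData
) external override onlyAIOracleCallback() {
    AIOracleRequest storage request = requests[requestId];
    require(request.sender != address(0), "request does not exist");
    request.output = output;
    if (callbackData.length > 0) {
        // Second leg of the nested inference: the first model's output becomes the prompt.
        uint256 model2Id = abi.decode(callbackData, (uint256));
        aiOracle.requestCallback{value: estimateFee(model2Id)}(
            model2Id, output, address(this), callbackGasLimit[model2Id], ""
        );
    }
}
```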
💡 When interacting with the contract from the client side, we need to pass the cumulative fee (for both models); then for each inference call we pass part of that cumulative fee. This is why we are calling `estimateFee` for `model2Id`.
This is an example of contract interaction from the Foundry testing environment. Note that we're estimating the fee for both models and passing the cumulative amount during the function call (we pass slightly more to ensure that the call will execute if the gas price changes).
The Batch Inference feature enables sending multiple inference requests within a single transaction, reducing costs by saving on network fees and improving the user experience. This bulk processing allows for more efficient handling of requests and results, making it easier to manage the state of multiple queries simultaneously.
Some of the use cases might be:
AIGC NFT marketplace - creating a whole AIGC NFT collection with just one transaction instead of many
Apps that need to handle requests simultaneously - Good example would be a recommendation system or chatbot with high TPS
Callback transaction fee is needed for the AI Oracle to submit the callback transaction. It's calculated by multiplying the current gas price by the callback gas limit for the invoked model (`gasPrice * callbackGasLimit`).
Request transaction fee is the regular blockchain fee needed to request inference by invoking the `aiOracle.requestCallback` method.
Total fee is calculated as the sum of the model fee, the callback transaction fee, and the request transaction fee.
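As a concrete illustration of that arithmetic (all numbers below are made up for the example):

```python
def total_fee(model_fee_wei: int, gas_price_wei: int,
              callback_gas_limit: int, request_tx_fee_wei: int) -> int:
    """Total fee = model fee + callback transaction fee + request transaction fee."""
    callback_tx_fee = gas_price_wei * callback_gas_limit  # gasPrice * callbackGasLimit
    return model_fee_wei + callback_tx_fee + request_tx_fee_wei

# Example: 0.0003 ETH model fee, 20 gwei gas price, 1_700_000 callback gas limit,
# and an assumed 0.002 ETH request transaction fee.
fee = total_fee(300_000_000_000_000, 20_000_000_000, 1_700_000, 2_000_000_000_000_000)
print(fee)  # total fee in wei
```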
To initiate a batch inference request, we interact with the `requestBatchInference` method. This method takes an additional `batchSize` parameter, which specifies the number of requests to the AI Oracle.
Input for batch inference should be structured as a string representing an array of prompt and seed values.
Prompt - string value that is mandatory in order to prompt AI Oracle.
Seed – an optional numeric value that, when used, allows you to generate slightly varied responses for the same prompt.
The result of a batch inference call is a dot-separated list of inference results.
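For illustration only (treat this exact shape as an assumption; the linked format specification is authoritative), a batch input for two requests might look like:

```json
[{"prompt": "a red fox in the snow", "seed": 42}, {"prompt": "a red fox in the snow"}]
```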
This is the prompt for interacting with Stable Diffusion model:
The result is a dot-separated list of IPFS CIDs:
To test it, you need to:
create a new JavaScript file
copy the script and add the env variables
create and deploy a prompt contract that supports batch inference
add values to `batchInference_abi` and `batchInference_address`
Prompt examples
Input: Prompt (). Output: Inference result ().
Input: Prompt (). Output: Inference result ().
Input: Instruction and prompt with the format of a JSON string: `{"instruction":"${instruction}","input":"${prompt}"}` (). The default instruction (if sending a raw prompt only) is `You are a helpful assistant`.
Output: Inference result ().
Input: Instruction and prompt with the format of a JSON string: `{"instruction":"${instruction}","input":"${prompt}"}` (). The default instruction (if sending a raw prompt only) is `You are a helpful assistant`.
Output: Inference result ().
Input: Prompt (). Output: IPFS hash of inference result (). Access with IPFS gateway, see .
Input: Prompt (). Output: IPFS hash of inference result (). Access with IPFS gateway, see .
You can also check the full implementation of .
The model fee required for a batch inference request is calculated by multiplying the single-model fee by the batch size (`batchSize * model.fee`). This fee covers the operational costs of running AI models, with a portion contributing to protocol revenue. For details on the required fees for each model, visit the page.
Note that we'll need to pass more gas to cover the AI Oracle callback execution, depending on the `batchSize` (check out ). For this purpose we can implement an `estimateFeeBatch` function in our contract. This method will interact with the `estimateFeeBatch` method from AIOracle.sol.
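A sketch of the wrapper, assuming AIOracle.sol exposes an `estimateFeeBatch(modelId, gasLimit, batchSize)` view function (verify the actual signature in the deployed contract):

```solidity
function estimateFeeBatch(uint256 modelId, uint256 batchSize) public view returns (uint256) {
    // Scale the fee estimate by the number of batched requests.
    return aiOracle.estimateFeeBatch(modelId, callbackGasLimit[modelId], batchSize);
}
```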
When performing batch inference with the AI Oracle, ensure the prompt is following the . Below is a simple script for interacting with batch inference:
That's it! With a few simple changes to the contract, we utilised the batch inference feature. This allowed us to get multiple responses with only one transaction.
AI Oracle Proxy
Prompt
SimplePrompt