Verifiable Oracle Protocol
ORA's main product, Onchain AI Oracle (OAO), brings AI onchain.
ORA breaks down the limitations of smart contracts by offering AI inference, so developers can innovate freely.
ORA’s work has been trusted by Compound, Ethereum Foundation, Uniswap, Optimism, Arbitrum, and beyond.
We build, they follow: we aim to be the last oracle developers will need, and the only oracle that has shipped an AI Oracle that is practical to use on Ethereum today.
Onchain AI engine: Our AI Oracle currently supports LlaMA 2 (7B) and Stable Diffusion. You can use and integrate them onchain directly. In the future, we will support any ML model.
Fast deployment: OAO can be deployed to any network, including any L2. You can build your own AI oracle that is programmable, permissionless, and censorship-resistant.
ORA is Ethereum's Trustless AI. ORA is the verifiable oracle protocol that brings AI and complex compute onchain.
ai/fi = AI + DeFi
The strategy to transform DeFi into ai/fi is simple: "identify the computationally intensive part of any DeFi protocol and replace it with AI."
ai/fi is the fusion of AI (verifiable AI inference provided by ORA) and DeFi.
Read more from ai/fi idea posts: , .
OAO integrates different AI models onchain in ORA AI Oracle nodes.
Smart contract developers can build their own contracts on top of the models in the AI Oracle and interact with the OAO contract, so that they can use AI onchain.
A vision initiated by ORA
World Supercomputer is a set of topologically heterogeneous peer-to-peer networks connected by a secure data bus. This is a new concept of a global network, first introduced by ORA.
While Ethereum consensus maintains its global ledger, the specialized networks scale computing power and storage capacity. For example, ORA serves the role of scaling computation and AI capabilities of the network.
ORA is committed to supporting Ethereum as the World Computer and its ecosystem for the future of DeFi, zkML, AI x Crypto, etc.
We hold regular World Supercomputer Summits. Check out the recaps:
zkML of (the most battle-tested and performant zkML framework)
of (run huge model like LlaMA2-7B and Stable Diffusion now)
zk+opML with (futuristic onchain AI fuses zkML's privacy and opML's scalability)
OAO is ORA's verifiable and decentralized AI oracle.
Some of the example use cases are: AIGC NFT with ERC-7007, zkKYC using facial recognition based on ML, onchain AI games (e.g. Dungeon and Dragons), prediction market with ML, content authenticity (deepfake verifier), compliant programmable privacy, prompt marketplace, reputation/credit scoring... For example integrations and ideas to build, see .
For more details, check out World Supercomputer's .
In this section, we provide a list of educational tutorials that will help you get started with Onchain AI Oracle (OAO).
 covers step-by-step creation of a simple Prompt contract that interacts with OAO. covers the process of converting your AI model and integrating it into OAO.
ORA provides developers with the tools necessary to build end-to-end trustless and decentralized applications empowered by Artificial Intelligence.
ORA's two main offerings are:
Initial Model Offering (IMO) incentivizes long-term and sustainable open-source contribution, by tokenizing ownership of open-source AI models. It allows funding for AI development, and rewards the community and open-source contributors. Token holders receive a portion of the revenue generated by onchain usage of the model.
Onchain AI Oracle (OAO) brings verifiable AI inference onchain. It empowers decentralized applications by providing trustless and permissionless use of AI. This opens up a variety of new use cases that weren't possible before. Using the Onchain AI Oracle incurs a fee, which is later distributed to IMO token holders.
Together IMO and OAO are pushing the boundaries of open-source development.
Developers can utilize Onchain AI Oracle (OAO) to supercharge their smart contracts with AI. Key features of OAO include:
All-in-one infrastructure with AI capability and automation
Higher performance and faster finality
Runs arbitrary programs and any large ML model
ORA Oracle Network consists of node operators who run AI Oracle nodes to execute and secure computations with verifiable proofs. Some of the key advantages of the oracle network include:
Unstoppable autonomous protocol and network
Optimal cryptography-native decentralization
Verifiable, decentralized, and secure network
Safeguarding the security of the base layer
Efficient allocation of computing power
1-of-N trust model
This tutorial will help you understand the structure of Onchain AI Oracle (OAO) and guide you through building a simple Prompt contract that interacts with it. We will implement the contract step by step, then deploy it to a blockchain network and interact with it.
Setup development environment
Understand the project setup and template repository structure
Learn how to interact with the OAO and build an AI powered smart contract
Clone template repository and install submodules
Move into the cloned repository
Copy .env.example, rename it to .env. We will need these env variables later for the deployment and testing. You can leave them empty for now.
Install foundry dependencies
At the beginning we need to import several dependencies which our smart contract will use.
IAIOracle - interface defining the requestCallback method that the Prompt contract calls on OAO
AIOracleCallbackReceiver - an abstract contract that contains an instance of AIOracle and implements a callback method that needs to be overridden in the Prompt contract
We'll start by implementing the constructor, which accepts the address of the deployed AIOracle contract.
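Assuming the template repository's layout, the imports and constructor could look like this (a sketch; the import paths and names follow the OAO template, so verify them against your clone):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.9;

import "./interfaces/IAIOracle.sol";
import "./AIOracleCallbackReceiver.sol";

contract Prompt is AIOracleCallbackReceiver {
    /// @notice Bind this contract to an already-deployed AIOracle instance.
    constructor(IAIOracle _aiOracle) AIOracleCallbackReceiver(_aiOracle) {}
}
```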
Now let's define a method that will interact with the OAO. This method takes two parameters: the id of the model and the input prompt data. It also needs to be payable, because the user must pass along the fee for the callback execution.
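A sketch of such a method (verify the exact requestCallback signature against the IAIOracle interface in the template repository):

```solidity
function calculateAIResult(uint256 modelId, string calldata prompt) external payable {
    // Convert the prompt to bytes for the oracle input.
    bytes memory input = bytes(prompt);
    // Forward the user's fee to OAO and register this contract for the callback.
    aiOracle.requestCallback{value: msg.value}(
        modelId,                    // ID of the AI model in use
        input,                      // user-provided prompt
        address(this),              // callback address
        callbackGasLimit[modelId],  // gas reserved for the callback
        ""                          // callbackData, unused in this simple example
    );
}
```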
In the code above we do the following:
Convert input to bytes
Call the requestCallback function with the following parameters:
modelId: ID of the AI model in use.
input: User-provided prompt.
callbackAddress: The address of the contract that will receive OAO's callback.
callbackGasLimit[modelId]: Maximum amount of gas that can be spent on the callback (yet to be defined).
callbackData: Callback data that is used in the callback.
Next step is to define the mapping that keeps track of the callback gas limit for each model and set the initial values inside the constructor. We’ll also define a modifier so that only the contract owner can change these values.
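One way to sketch this (the initial gas-limit values below are illustrative placeholders, not official figures):

```solidity
// Gas reserved for the OAO callback, per model.
mapping(uint256 => uint64) public callbackGasLimit;
address immutable owner;

modifier onlyOwner() {
    require(msg.sender == owner, "Only owner");
    _;
}

constructor(IAIOracle _aiOracle) AIOracleCallbackReceiver(_aiOracle) {
    owner = msg.sender;
    callbackGasLimit[50] = 500_000;   // e.g. Stable Diffusion (illustrative value)
    callbackGasLimit[11] = 5_000_000; // e.g. LlaMA (illustrative value)
}

function setCallbackGasLimit(uint256 modelId, uint64 gasLimit) external onlyOwner {
    callbackGasLimit[modelId] = gasLimit;
}
```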
We want to store all the requests that happened, so we create a data structure for the request data and the mapping between requestId and the request data.
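A sketch of the request bookkeeping, plus the lines added inside calculateAIResult (requestCallback is assumed to return the new request's id):

```solidity
struct AIOracleRequest {
    address sender;
    uint256 modelId;
    bytes input;
    bytes output;
}

// requestId => request data
mapping(uint256 => AIOracleRequest) public requests;

event promptRequest(uint256 requestId, address sender, uint256 modelId, string prompt);

// Inside calculateAIResult: capture the returned requestId and store the request.
uint256 requestId = aiOracle.requestCallback{value: msg.value}(
    modelId, input, address(this), callbackGasLimit[modelId], ""
);
AIOracleRequest storage request = requests[requestId];
request.input = input;
request.sender = msg.sender;
request.modelId = modelId;
emit promptRequest(requestId, msg.sender, modelId, prompt);
```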
In the code snippet above we added prompt, sender and the modelId to the request and also emitted an event.
Now that we implemented a method for interaction with the OAO, let's define a callback that will be invoked by the OAO after the computation of the result.
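A sketch of the callback (the function and modifier names follow the OAO template's AIOracleCallbackReceiver; verify them against your copy):

```solidity
event promptsUpdated(uint256 requestId, uint256 modelId, string input, string output, bytes callbackData);

// modelId => prompt => AI output
mapping(uint256 => mapping(string => string)) public prompts;

function aiOracleCallback(
    uint256 requestId,
    bytes calldata output,
    bytes calldata callbackData
) external override onlyAIOracleCallback() {
    // Only handle requests that were actually created by this contract.
    AIOracleRequest storage request = requests[requestId];
    require(request.sender != address(0), "request does not exist");
    request.output = output;
    // Store the result so it can be read back by model id and prompt.
    prompts[request.modelId][string(request.input)] = string(output);
    emit promptsUpdated(requestId, request.modelId, string(request.input), string(output), callbackData);
}
```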
We've overridden the callback function from AIOracleCallbackReceiver.sol. It's important to use the modifier so that only OAO can call back into our contract.
Function flow:
First we check whether a request with the provided id exists. If it does, we add the output value to the request.
Then we define the prompts mapping, which stores all the prompts and outputs for each model that we use.
At the end we emit an event that the prompt has been updated.
Notice that this function takes callbackData as the last parameter. This parameter can be used to execute arbitrary logic during the callback. It is passed during the requestCallback call. In our simple example, we left it empty.
Finally let's add the method that will estimate the fee for the callback call.
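The fee estimation can simply delegate to OAO, using the per-model gas limit (a sketch; verify estimateFee's signature against the IAIOracle interface):

```solidity
function estimateFee(uint256 modelId) public view returns (uint256) {
    // Ask OAO what a request for this model costs, given our callback gas limit.
    return aiOracle.estimateFee(modelId, callbackGasLimit[modelId]);
}
```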
With this we finished with the source code for our contract. The final version should look like this:
Add your PRIVATE_KEY, RPC_URL and ETHERSCAN_KEY to the .env file. Then source the variables in the terminal.
Create a deployment script
Then open script/Prompt.s.sol and add the following code:
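A minimal Foundry deployment script sketch (the OAO_PROXY environment variable and the import paths are assumptions; substitute the proxy address for your target network):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.9;

import "forge-std/Script.sol";
import {Prompt} from "../src/Prompt.sol";
import {IAIOracle} from "../src/interfaces/IAIOracle.sol";

contract PromptScript is Script {
    function run() external {
        // OAO_PROXY: the AIOracle proxy address on your target network.
        address oaoProxy = vm.envAddress("OAO_PROXY");
        vm.startBroadcast(vm.envUint("PRIVATE_KEY"));
        Prompt prompt = new Prompt(IAIOracle(oaoProxy));
        console.log("Prompt deployed at:", address(prompt));
        vm.stopBroadcast();
    }
}
```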
Run the deployment script
Let's use Stable Diffusion model (id = 50).
First call the estimateFee method to calculate the fee for the callback.
Then request AI inference from OAO by calling calculateAIResult method. Pass the model id and the prompt for the image generation. Remember to provide estimated fee as a value for the transaction.
Install a browser wallet if you haven't already (e.g. MetaMask)
Copy the contract along with necessary dependencies to Remix.
Choose the solidity compiler version and compile the contract to the bytecode
Deploy the compiled bytecode. Once we have compiled our contract, we can deploy it.
First go to the wallet and choose the blockchain network for the deployment.
Deploy the contract by signing the transaction in the wallet.
In this tutorial we covered, step by step, writing a Solidity smart contract that interacts with ORA's Onchain AI Oracle. We then compiled and deployed the contract to a live network and interacted with it. In the next tutorial, we will extend the Prompt contract to support AI-generated NFT collections.
AI development experience
The user contract sends the AI request to OAO onchain by calling the requestCallback function on the OAO contract.
Each AI request will initiate an opML request.
OAO will emit a requestCallback event, which will be collected by an opML node.
opML node will run the AI inference, and then upload the result on chain, waiting for the challenge period.
During the challenge period, the opML validators will check the result, and challenge it if the submitted result is incorrect.
If the submitted result is successfully challenged by one of the validators, the submitted result will be updated on chain.
After the challenge period, the submitted result on chain is finalized.
When the result is uploaded or updated on chain, the provided result in opML will be dispatched to the user's smart contract via its specific callback function.
To integrate with OAO, you will need to write your own contract.
Inherit AIOracleCallbackReceiver in your contract and bind it to a specific OAO address:
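For example (a sketch; the contract name MyAIConsumer is hypothetical, and the import paths depend on how you installed OAO):

```solidity
import "OAO/contracts/interfaces/IAIOracle.sol";
import "OAO/contracts/AIOracleCallbackReceiver.sol";

contract MyAIConsumer is AIOracleCallbackReceiver {
    // _aiOracle is the OAO (proxy) address on your target network.
    constructor(IAIOracle _aiOracle) AIOracleCallbackReceiver(_aiOracle) {}
}
```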
Write your callback function to handle the AI result from OAO. Note that only OAO can call this function:
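A sketch of the callback handler (onlyAIOracleCallback is the guard modifier provided by AIOracleCallbackReceiver in the OAO template; verify the name against your copy):

```solidity
// requestId => raw AI output
mapping(uint256 => bytes) public results;

function aiOracleCallback(
    uint256 requestId,
    bytes calldata output,
    bytes calldata callbackData
) external override onlyAIOracleCallback() {
    // Only OAO can reach this point; store the result for later reads.
    results[requestId] = output;
}
```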
When you want to initiate an AI inference request, call OAO as follows:
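A request sketch (CALLBACK_GAS_LIMIT is a placeholder for the gas you reserve for your callback; the value shown is illustrative):

```solidity
uint64 constant CALLBACK_GAS_LIMIT = 500_000; // placeholder value

function requestAI(uint256 modelId, bytes calldata input) external payable {
    // msg.value must cover the fee returned by estimateFee for this model.
    aiOracle.requestCallback{value: msg.value}(
        modelId,
        input,
        address(this),       // receiver of aiOracleCallback
        CALLBACK_GAS_LIMIT,  // gas reserved for the callback
        ""                   // optional callbackData passed back to the callback
    );
}
```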
Usage of OAO requires a fee for each request. Obtain the current fee by calling estimateFee on OAO (or on your integrated contract) with the specific model id, then send the request.
The flow for getting and paying the fee is:
Call estimateFee (on OAO, or on your contract if implemented) to read the fee for inference with a certain model
Usage Guide for Anyone to Use Onchain AI Directly as a User
As a user, to interact with AI Oracle, you can:
Interact with Prompt contract directly on Etherscan
We built an interface for users to interact with Onchain AI Oracle directly.
Enter your prompt
Send transaction
See AI inference result
Here's the guide to use AI Oracle by interacting with Prompt contract using Etherscan:
In the Prompt contract's Read Contract section, call estimateFee with the specified modelId.
In the Prompt contract's Write Contract section, call calculateAIResult with the fee (converted from wei to ether), prompt, and modelId.
In the AIOracle contract, watch for the new transaction that fulfills the request you sent.
In the Prompt contract's Read Contract section, call getAIResult with the previous modelId and prompt to see the AI inference result.
If you prefer a video version of the tutorial, check it .
Final version of the code can be found .
To follow this tutorial you need to have and installed.
Go to page and find the OAO_PROXY address for the network you want to deploy to.
Once the contract is deployed and verified, you can interact with it. Go to blockchain explorer for your chosen network (eg. ), and paste the address of the deployed Prompt contract.
After the transaction is executed, and the OAO calculates the result, you can check it by calling prompts method. Simply input model id and the prompt you used for image generation. In the case of Stable Diffusion, the output will be a CID (content identifier on ). To check the image go to .
Open your solidity development environment. We'll use .
To deploy the Prompt contract we need to provide the address of the already deployed AIOracle contract. You can find this address on the . We are looking for the OAO_PROXY address.
Once the contract is deployed, you can interact with it. Remix supports API for interaction, but you can also use blockchain explorers like . Let's use Stable Diffusion model (id = 50).
First call the estimateFee method to calculate the fee for the callback.
Then request AI inference from OAO by calling `calculateAIResult` method. Pass the model id and the prompt for the image generation. Remember to provide estimated fee as a value for the transaction.
After the transaction is executed, and the OAO calculates result, you can check it by calling prompts method. Simply input model id and the prompt you used for image generation. In the case of Stable Diffusion, the output will be a CID (content identifier on ).
Basic knowledge of Ethereum smart contract development ()
Source code of OAO:
For supported models in OAO and deployment addresses, see page.
For example integrations and ideas to build, see .
Check out .
To build with AI models of OAO, we provided an example of integration to LlaMA2 model: .
Get estimated fee in wei (eg. )
Call to write and request AI inference on your contract, and fill in estimated fee in ether (eg. )
Use frontend.
Go to
Check out the if you have any question.
Bring Your Own Model into OAO
Understand how to transform AI model and inference code in order to integrate it into Onchain AI Oracle (OAO).
Execute a simple dispute game and understand the process of AI inference verification.
Navigate to cloned repository
To install the required dependencies for your project, run the following command:
If there are some missing dependencies, make sure to install them in your Python environment.
First we need to train a DNN model using PyTorch. The training part is shown in examples/mnist/trainning/mnist.ipynb.
After training, the model is saved at examples/mnist/models/mnist/mnist-small.state_dict.
Navigate to the mnist folder
Convert the Python model into ggml format
To convert an AI model written in Python to the ggml format, we execute a Python script, passing the file that stores the model as a parameter. The output is a binary file in ggml format. Note that the model is saved in big-endian, making it easy to process in the big-endian MIPS-32 VM.
The next step is to write the inference code in Go. Then we will transform the Go binary into a MIPS VM executable file.
Go supports compilation to MIPS. However, the generated executable is in ELF format; we'd like a pure sequence of MIPS instructions instead. To build an ML program for the MIPS VM, execute the following steps:
Navigate to the mnist_mips directory and build the Go inference code
We have now compiled our AI model and inference code into MIPS VM executable code.
First we need to specify the operating system that runs inside our container. In this case we're using ubuntu:22.04.
Then we configure ssh keys, so that docker container can clone all the required repositories.
Navigate to the root directory inside the docker container and clone the opml repository along with its submodules.
Lastly, we tell docker to build executables and run the challenge script.
In order to successfully clone the opml repository, you need to generate a new ssh key and add it to your GitHub account. Once it's generated, save the key as id_rsa in the local repository where the Dockerfile is placed. Then add the public key to your GitHub account.
Build the docker image
Run the local Ethereum node
In another terminal run the challenge script
After executing the steps above you should be able to see interactive challenge process in the console.
The script first deploys the necessary contracts to the local node. The proposer opML node executes AI inference, and challenger nodes can dispute it if they think the result is not valid. The challenger and proposer interact to find the step at which their computations diverge. Once the disputed step is found, it is sent to the smart contract for arbitration. If the challenge is successful, the proposer node gets slashed.
In this tutorial we achieved the following:
converted our AI model from Python to ggml format
compiled AI inference code written in Go to MIPS VM executable format
ran the dispute game inside a docker container and understood the opML verification process
In order to use your AI model onchain, you need to run your own opML nodes; then the model can be integrated into OAO. Try to reproduce this tutorial with your own model.
If you want to retrieve your historical AI inference (eg. AIGC image), you can find them on blockchain explorer:
Access your requestId in the "Logs" tab. Example's case: "1928".
Initial Model Offering
TL;DR: IMO tokenizes AI model onchain.
For AI models, IMO enables sustainable funding for open-source AI models.
For ecosystems, IMO helps align values and incentives for distribution and ongoing contribution.
Many open-source AI models face challenges in monetizing their contributions, leading to a lack of motivation for contributors and organizations alike. As a result, the AI industry is currently led by closed-source, for-profit companies. The winning formula for open-source AI models is to gather more funding and build in public.
With IMO, we can win the fight for open-source AI. IMO enables the sustainability of the open-source AI ecosystem by fostering long-term benefits and encouraging engagement with, and funding of, the open-source AI community. The win is when we have better open-source models than proprietary ones.
IMO tokenizes the ownership of open-source AI models, sharing their profit with token holders.
Learn more:
Permissionless Technology for AI x Crypto
IMO, introduced by ORA, is permissionless, so anyone and any community can carry out an IMO for their AI model.
IMO tokenizes specific AI models, which gives:
The community the ability to efficiently fundraise for open-source.
Contributors incentives to continue improving a globally accessible model.
Token holders revenue opportunities from the use of the model onchain.
IMO is steering us to a future where AI is sustainable, diverse and open for all.
Based on the framework of IMO, the ecosystem is evolving towards a more structured and layered approach in AI x Crypto advancement.
The foundation begins with IMO (Initial Model Offering), focusing on the tokenization of foundation models and decentralized networks.
Moving up, the IAO (Initial Agent Offering) introduces fine-tuned models tailored for specific tasks, enhancing the adaptability and precision of AI agents.
At the apex, Inference Assets represent the assets of verifiable decentralized inference, ensuring that the AI DApps operate with integrity and transparency.
These assets, derived from onchain AI, offer additional revenue streams for onchain AI through mechanisms such as royalty fees associated with these inference asset NFTs.
https://www.ora.io/app/tasks
The ORA Points Program is focused on identifying contributors who engage with the ORA ecosystem. 10% of the ORA token supply will be distributed through the ORA Points Program.
Points will act as an indicator for us to identify the most committed supporters of ORA's ecosystem as it evolves.
You can earn points in the following ways:
Stake in ORA's Staking
Participate in ORA's IMO
...more ways coming
In this tutorial we explain how to integrate your own AI model into Onchain AI Oracle (OAO). We will start by looking at repository and trying to understand what's happening there. At the end we will showcase how works, by running a simple dispute game script inside a docker container.
installed
Clone repository
Ggml is a file format that consists of a version number, followed by three components that define a large language model: the model's hyperparameters, its vocabulary, and its weights. Ggml allows for more efficient inference runs on CPUs. We will now convert the Python model to ggml format by executing the following steps:
The build script will compile the Go code and then run a script that transforms the compiled Go code into a MIPS VM executable file.
We can test the dispute game process. We will use a from opml repository to showcase the whole verification flow.
For this part of the tutorial we will use , so make sure to have it installed.
Let's first check the content of the that we are using:
Then we need to install all the necessary dependencies in order to run .
Find your transaction for sending AI request, and enter the "Logs" tab. Example tx:
In , look for the Invoke Callback transaction with the same requestId.
Normally, this transaction will be around the same time as the one in step 1. To filter transactions by date, click "Advanced Filter" and then the button near "Age".
Find the transaction for AI inference result, and enter the "Logs" tab. Example tx:
Access the output data. Example's case: "QmecBGR7dD7pRtY48FEKoeLVsmBTLwvdicWRkX9xz2NVvC", which is an IPFS hash that can be accessed via an IPFS gateway.
In this age of AI, ORA is introducing a new mechanism, .
For token holders, IMO lets anyone capture the value of AI models onchain, from sources including onchain revenue and inference assets (eg. ).
Blog -
Announcement -
ETHDenver Talk -
Inference Assets are tokenized representations (usually compatible with ) of AI inference results. For instance, an NFT featuring AI-generated image from onchain Stable Diffusion model is considered an inference asset.
For more information, check out .
See your points at .
Interact with OAO on networks including Ethereum/Optimism mainnet through
*: Return value is IPFS Hash (access with IPFS gateway, see ).
and are both example smart contracts interacted with OAO.
Deprecated contracts: , .
| Model ID | Model | Fee | Networks |
| --- | --- | --- | --- |
| 11 | LlaMA 3 (8B) | Mainnet: 0.0003 ETH / 3 MATIC; Testnet: 0.01 ETH | Ethereum, Optimism, Arbitrum, Manta, Linea, Base |
| 13 | OpenLM (1B) | Mainnet: 0.0003 ETH; Testnet: 0.01 ETH | Ethereum |
| 50 | Stable Diffusion* | Mainnet: 0.0003 ETH / 3 MATIC; Testnet: 0.01 ETH | Ethereum, Optimism, Arbitrum, Manta, Linea, Base |
OAO Proxy
Prompt
SimplePrompt
Here is an updated list of active tasks you can do to earn points.
3 - 6 Points Per Usage, Repeatable Task
ORA OAO enables verifiable, decentralized AI inference onchain. You can call AI directly onchain through OAO.
To complete:
Select a model to use
Sign transaction for calling AI request
You will earn more points if you are using a model tokenized with IMO (right now, there's one available, OLM)
10% of Referee’s Points (not including Twitter Reward and Referee Reward) as Referrer, 5 Points as Referee, Repeatable for Referring others
The ORA Points Program comes with a Referral System. There are 2 ways to partake: i) Join as a referee through other people’s referral code or ii) become a referrer and share ORA Points Program with others.
Here are the steps for confirming your referee status:
Fill in your referral code in “Enter Code” section
You will receive 5 Points for this confirmation as referee
Here are the steps for referring others:
Locate your referral code on Dashboard
Distribute referral code to others
Once others use your referral code, you will continuously receive 10% of the referee's points
In the future, we will enable more tasks for users to learn about and participate in ORA’s ecosystem.
Here are some tips to take part in the ORA Points Program effectively:
Try calling all models on all chains to explore their different costs and features.
Spread the word and share your referral code widely.
Encourage your referees to become an active participant since points equivalent to 10% of theirs will be credited to you.
Enable IMO with Onchain AI Model and Revenue Sharing Token
IMO requires two core components:
Onchain AI Model with Verifiability
Revenue Sharing of Onchain Usage
IMO needs a way to run AI models fully onchain and verifiably.
Currently, opML is at the core of OAO (Onchain AI Oracle), which is essential to bring AI models to IMO.
Holders of IMO tokens will receive the benefits of revenue streams including but not limited to:
Revenue of model usage (Model Ownership, ERC-7641 Intrinsic RevShare Token): Each use of the AI model onchain will incur a fee, which will be distributed to IMO token holders.
Revenue of AI-generated content (Inference Asset, eg. ERC-7007 Zero-Knowledge AI-Generated Content Token): Each use of the AI model generates a specific output and result (e.g. Stable Diffusion for an image NFT and Sora for a video NFT), which may carry a royalty fee and a mint fee that can be distributed to IMO token holders.
Go to the
Use OAO on mainnets (eg. Ethereum mainnet, Optimism mainnet…). Follow if you have questions.
Once AI results are returned on the same page, you can access and see your score in the .
Go to the
Go to the
See for more information.
We may host special challenges featuring bonus points. Stay up-to-date with our official communications on .
Regularly check the page for new developments and additional tasks.
If you experience any issues, feel free to reach out on .
We invented and , the only two solutions to make any AI model onchain.
We standardized to achieve revenue sharing of IMO model token.
Opp/ai, as the latest fusion of zkML and opML, can include any zkML approach. It means that advances in zkML will be directly reflected in opp/ai.
Opp/ai can be utilized to conceal the fine-tuning weights of models where the majority of the weights are already publicly available. This is particularly relevant for open-source models that have been fine-tuned for specialized tasks. For instance, the LoRA weights in the attention layers of the Stable Diffusion model can be protected using opp/ai framework.
This capability is crucial for preserving the proprietary enhancements made to publicly shared models, ensuring that while the base model remains accessible, the unique adaptations that provide competitive advantage remain confidential.
Individual voice tuning in text-to-voice models: Text-to-voice service providers may offer personalized voice models that are tailored to the individual's voice characteristics. These personalized models are sensitive and contain valuable data. The opp/ai framework can ensure that the personalized voice model's parameters remain confidential while still offering the service to end-users verifiably.
Financial sector: Trading algorithms are developed to predict market movements and execute trades automatically. These algorithms are highly valuable and contain sensitive strategies that firms wish to protect. A financial institution could use the opp/ai framework to conceal the weights of a model that has been specifically tuned to its trading strategy.
Gaming industry: AI models are used to create challenging and engaging non-player characters (NPCs). Game developers may fine-tune these models to create unique behaviors or strategies that are specific to their game. By using the opp/ai framework, developers can hide the fine-tuned weights that contribute to the NPCs' competitive edge, preventing other developers from copying these features while still providing an immersive gaming experience.
, invented by ORA, represents an endgame onchain AI framework and an innovative approach to addressing the challenges of privacy and computational efficiency in blockchain-based machine learning systems. Opp/ai integrates Zero-Knowledge Machine Learning (zkML) for privacy with Optimistic Machine Learning (opML) for efficiency, creating a hybrid model tailored for onchain AI.
Research paper on .
To establish a verifiable and decentralized oracle network, it's critical to ensure the computation validity of results on the blockchain. This process involves a proof system that ensures the computation is reliable and truthful. By doing so, we can enhance the integrity and trustworthiness of decentralized applications that rely on any-size compute, including AI inference.
Several technologies invented and developed by ORA have emerged to facilitate the verifiable computation including AI inference on the blockchain. These innovations include Optimistic Machine Learning (opML), Keras2Circom (Zero-Knowledge Machine Learning, zkML), and Optimistic Privacy Preserving AI (opp/ai), with each representing a significant stride towards integrating verifiable proofs into the blockchain.
opML comprises the following key components:
Fraud Proof Virtual Machine (Off-chain VM): A robust off-chain engine responsible for executing machine learning inference and generating new VM states as outputs. When discrepancies occur, manifested as different VM states, the MIPS VM employs a bisection method to pinpoint the exact step, or instruction, where the divergence begins.
opML Smart Contracts (On-chain VM): Utilized for the verification of computational results, ensuring the accuracy of the off-chain computation. These contracts allow the execution of a single MIPS instruction, enabling the on-chain environment to verify specific steps in the computation process. This capability is vital for resolving disputes and ensuring the integrity of the off-chain computation.
Fraud Proofs: In the event of a dispute, fraud proofs generated by the verifier serve as conclusive evidence, illustrating the discrepancy in computation and facilitating the resolution process through the opML smart contracts.
The verification game is a process in which two or more parties are assumed to execute the same program. The parties can then challenge each other, pinpointing the step at which their executions diverge. That step is sent to the smart contract for verification.
For the system to work as intended it's important to ensure:
Deterministic ML execution
opML ensures consistent ML execution by using fixed-point arithmetic and software-based floating-points, eliminating randomness and achieving deterministic outcomes with a state transition function.
Separate Execution from Proving
opML utilizes a dual-compilation method: one for optimized native execution and another for fraud-proof VM instructions for secure verification. This ensures both fast execution and reliable, machine-independent proof.
Efficiency of AI model inference in VM
The existing fraud proof systems that are widely adopted in the optimistic rollup systems need to cross-compile the whole computation into fraud proof VM instructions, which will result in inefficient execution and huge memory consumption. opML proposes a novel multi-phase protocol, which allows semi-native execution and lazy loading, which greatly speeds up the fraud proof process.
The requester first initiates an ML service task.
The server then finishes the ML service task and commits results on chain.
The verifier will validate the results. Suppose a verifier declares the results wrong. It starts a verification game (bisection protocol) with the server and tries to disprove the claim by pinpointing one concrete erroneous step.
Finally, arbitration about the single step is conducted on the smart contract.
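The steps above can be illustrated with a simplified, hypothetical sketch (not the actual opML contracts): after bisection has narrowed the dispute to a single instruction, the contract replays that one step onchain and compares the resulting state roots.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.9;

// Hypothetical one-step arbitration sketch. The real opML contracts
// interpret an actual MIPS instruction against a Merkleized VM state;
// executeOneStep below is a placeholder for that logic.
contract OneStepArbitrator {
    function arbitrate(
        bytes32 agreedState,      // last VM state both parties agree on
        bytes32 claimedNextState, // next state committed by the server
        bytes calldata step       // the disputed instruction plus its witness data
    ) external pure returns (bool serverWins) {
        // Re-execute exactly one step onchain and compare the results.
        bytes32 computedNext = executeOneStep(agreedState, step);
        serverWins = (computedNext == claimedNextState);
    }

    function executeOneStep(bytes32 state, bytes calldata step)
        internal pure returns (bytes32)
    {
        // Placeholder for single-instruction MIPS interpretation.
        return keccak256(abi.encodePacked(state, step));
    }
}
```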
Represents an extension of single-phase verification game, which allows for a better utilization of computing resources.
The single-phase verification game cross-compiles the whole ML inference code into Fraud Proof VM instructions. This is less efficient than native execution (it cannot utilize the full potential of GPU/TPU acceleration and parallel processing). The Fraud Proof VM also has limited memory, which prevents loading large models into memory directly.
To address the issues above, multi-phase verification game introduces the following properties:
Semi-Native Execution: with the multi-phase design, we only need to conduct the computation in the VM in the final phase, which resembles the single-phase protocol. For the other phases, we have the flexibility to perform the computations that lead to state transitions in the native environment, leveraging parallel processing on CPU, GPU, or even TPU. By reducing the reliance on the VM, we significantly minimize overhead, bringing the execution performance of opML close to that of the native environment.
Lazy Loading Design: to optimize the memory usage and performance of the fraud proof VM, we implement a lazy loading technique. We do not load all the data into the VM memory at once, but only the keys that identify each data item. When the VM needs to access a specific data item, it uses the key to fetch the item from the external source and load it into memory. Once the data item is no longer needed, it is swapped out of memory to free up space for other data items. This way, we can handle large amounts of data without exceeding the memory capacity or compromising the efficiency of the VM.
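The lazy-loading idea above can be sketched with a small key-indexed cache. This is an illustration under assumed names, not the actual fraud proof VM:

```python
# Illustrative sketch of lazy loading: the VM memory holds only keys up
# front; data is fetched from an external source on first access and
# swapped out (least-recently-used) when the capacity is reached.

from collections import OrderedDict

class LazyVMMemory:
    def __init__(self, fetch, capacity: int):
        self.fetch = fetch          # pulls a data item by key from outside the VM
        self.capacity = capacity    # max items resident in VM memory
        self.resident = OrderedDict()

    def get(self, key):
        if key in self.resident:
            self.resident.move_to_end(key)         # mark as recently used
        else:
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)  # swap out least-recently-used
            self.resident[key] = self.fetch(key)   # load on demand
        return self.resident[key]

# Toy external store standing in for model weights keyed by tensor name.
store = {"w1": [1, 2], "w2": [3, 4], "w3": [5, 6]}
mem = LazyVMMemory(store.__getitem__, capacity=2)
mem.get("w1"); mem.get("w2"); mem.get("w3")  # "w1" is evicted here
```

With capacity 2, accessing the third tensor evicts the least-recently-used one, so resident memory never exceeds the cap regardless of model size.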
https://github.com/ora-io/keras2circom
ZkML is a proving framework that leverages zero-knowledge proofs to prove the validity of ML inference results onchain. Due to its private nature, it can protect confidential data and model parameters during training and inference, addressing privacy concerns and reducing the blockchain's computational load.
Besides being production-ready, circomlib-ml has a rich ecosystem:
ORA leverages opML for Onchain AI Oracle because it's the most feasible solution on the market for running AI models of any size onchain. The comparison between opML and zkML can be viewed from the following perspectives:
Proof system: opML uses fraud proofs, while zkML uses zk proofs.
Security: opML uses crypto-economic based security, while zkML uses cryptography based security.
Finality: We can define the finalized point of zkML and opML as follows:
zkML: Zero-knowledge proof of ML inference is generated (and verified).
opML: The challenge period of the ML inference has passed. With additional mechanisms, finality can be achieved in a much shorter time than the challenge period.
Opp/AI combines both opML and zkML approaches to achieve scalability and privacy. It preserves privacy while being more efficient than zkML.
Compared to pure zkML, opp/ai has much better performance with the same privacy feature.
(Optimistic Machine Learning), invented and developed by ORA, introduces a groundbreaking approach to integrating machine learning with blockchain technology. By leveraging principles similar to those of optimistic rollups, opML ensures the validity of computations in a decentralized manner. This framework enhances transparency and fosters trust in machine learning inference by allowing onchain verification of AI computation.
Detailed explanation of opML can be found in our .
Check out repository.
, built by ORA, is the first advanced zkML framework that is battle-tested. In the benchmark by the Ethereum Foundation ESP Grant Proposal [FY23-1290] on leading zkML frameworks, Keras2Circom and its underlying circomlib-ml are proven to be more performant than other frameworks.
by Ethereum Foundation
Performance: opML is much more performant, while zkML has long proof generation time and extremely high memory consumption (, , , , , ).
https://ora.io/app/stake
ORA Points Program Staking is designed to ensure the security of the upcoming fully decentralized AI Oracle network.
You can stake your assets and receive ORA Points as a reward for contributing to the decentralization of the AI Oracle network secured by opML.
Currently, ORA Points Program Staking supports staking of these assets:
In the first phase, ORA Points Program Staking has a cap on the total staked assets.
ETH & LST Pool (ETH, stETH, STONE): 10,000 in ETH & LST
OLM Pool (OLM): 300,000,000 in OLM
2) Select the token you want to stake, and enter amount.
3) Press "CONFIRM", and sign the transaction in your wallet. You are receiving points once transaction is confirmed on blockchain.
2) Switch to "WITHDRAW" tab.
3) Select the staked token you want to withdraw, and enter amount.
4) Press "INITIATE WITHDRAW", and sign the transaction in your wallet. You initiate withdraw once transaction is confirmed on blockchain.
5) Switch to the second window of "WITHDRAW" tab by clicking the arrow button at the bottom right.
6) Press "COMPLETE WITHDRAW" when the assets are eligible to be fully withdrawn, and sign the transaction in your wallet. You complete withdraw once transaction is confirmed on blockchain.
Staking points are measured as the time-integrated amount staked, in units of TOKEN × days.
Here's the formula for the points of one asset being staked:

$$p = \sum_{t=1}^{n} c \cdot m$$

where $p$ is the total points awarded, $t$ indexes the staking days when you stake for $n$ days, $c$ is the constant of staking points for different assets, and $m$ is the amount of the staked asset.
The constant of staking points (equal to the points you get if you stake 1 unit of an asset for 1 day):
ETH & LST Pool (ETH, stETH, STONE): 8. You will receive 8 points per ETH staked every 24 hours.
OLM Pool (OLM): 0.0024. You will receive 24 points per 10,000 OLM staked every 24 hours.
In simpler words:
You accumulate points over time when you stake into ORA Points Program Staking.
Points are updated and awarded every day.
You can stake multiple assets and earn points for all of them.
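The accrual rule above can be sketched in a few lines. The function and constant names are illustrative; the constants follow the per-day rates listed above (8 points per ETH, 24 points per 10,000 OLM):

```python
# Sketch of staking-points accrual: for a constant staked amount,
# points = c * amount * days, using the per-asset constants above.
# Names are illustrative assumptions.

POINTS_CONSTANT = {
    "ETH": 8,        # 8 points per 1 ETH (or LST) per day
    "OLM": 0.0024,   # 24 points per 10,000 OLM per day
}

def staking_points(asset: str, amount: float, days: int) -> float:
    """Total points for a constant `amount` staked for `days` days."""
    return POINTS_CONSTANT[asset] * amount * days

staking_points("ETH", 1, 30)       # 240 points for 1 ETH over 30 days
staking_points("OLM", 10_000, 1)   # 24 points for 10,000 OLM over 1 day
```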
(Native ETH)
(Lido)
(StakeStone)
(OpenLM)
1) Go to ORA Staking Page: .
4) See your points in dashboard: . Please note your points will be distributed every day, instead of every hour.
1) Go to ORA Staking Page: .
| Pool | Staking Cap | Points Rate |
| --- | --- | --- |
| ETH & LST Pool (ETH, stETH, STONE) | 10,000 in ETH & LST | 8 points per 1 ETH & LST staked |
| OLM Pool (OLM) | 300,000,000 in OLM | 24 points per 10,000 OLM staked |
An operation that brings together multiple elements.
A TypeScript-like language for WebAssembly.
A type of middleware that performs operations without human control. Commonly used in blockchain due to smart contracts' inability to trigger automatic functions and DApps' need for periodic calls.
Computational Entity. The customizable and programmable software defined by developers, and the underlying oracle network running that software.
A software for operating a node. Usually developed by the network's core developer team.
A time window for finalizing disagreement on computation. Usually takes weeks. Used in traditional oracle networks.
Refers to Ethereum's consensus algorithm.
An operation that rejects unwanted elements.
A staking mechanism that involves a "fisherman" who checks the integrity of nodes and raises disputes, and an "arbitrator" who decides dispute outcomes.
An approach for querying data. Commonly used in the front-end of DApps.
A high-performance implementation of zk-SNARK by Electric Coin Co.
The former naming format of HyperOracle. Use HyperOracle.
A type of middleware that fetches and organizes data. Commonly used in blockchain due to blockchain data's unique storage model.
Initial Model Offering (IMO) is a mechanism for tokenizing an open-source AI model. Through its revenue sharing model, IMO fosters transparent and long-term contributions to any AI model.
The process works as follows:
IMO launches an ERC-20 token (more specifically, ERC-7641 Intrinsic RevShare Token) of any AI model to capture its long-term value.
Anyone who purchases the token becomes one of the owners of this AI model.
Token holders share revenue generated from onchain usage of the tokenized AI model.
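The revenue-sharing step can be sketched as a pro-rata split. This is a simplification; ERC-7641's actual claim mechanics differ, and all names below are illustrative assumptions:

```python
# Pro-rata revenue sharing sketch for an IMO token: each holder's claim
# on a revenue amount is proportional to their token balance.
# This simplifies ERC-7641 claim mechanics; names are illustrative.

def revenue_share(balances: dict, revenue: float) -> dict:
    """Split `revenue` among holders in proportion to their balances."""
    total_supply = sum(balances.values())
    return {holder: revenue * bal / total_supply
            for holder, bal in balances.items()}

holders = {"alice": 600, "bob": 300, "carol": 100}
shares = revenue_share(holders, revenue=1000.0)
# alice -> 600.0, bob -> 300.0, carol -> 100.0
```

Holding 60% of the supply entitles a holder to 60% of the shared revenue, which is what aligns token holders with the model's long-term usage.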
A programming language for Web.
A delay after outputting execution result caused by proof generation or dispute period.
An operation that associates input elements with output elements.
Services or infrastructure needed in pipeline of development.
A computer running client.
Onchain AI Oracle (OAO) is an oracle system which provides verifiable AI inference to smart contracts.
The system consists of 2 parts:
Set of smart contracts - Any dapp can request AI inference by calling the OAO smart contracts. Oracle nodes listen to the events emitted by the OAO smart contracts and execute AI inference. Upon successful execution, the results are returned in the callback function.
On the (specific) blockchain / not on the (specific) blockchain. Usually refers to data or computation.
Optimistic Machine Learning (opML) is a machine learning proving framework. It uses game theory to ensure the validity of AI inference results. The proving mechanism works similarly to the optimistic rollup approach.
Optimistic Privacy Preserving AI (opp/ai) is a machine learning proving framework. It combines cryptography and game theory to ensure the validity of AI inference results.
Name of our project. Previously HyperOracle.
A component for processing data in DApp development. Can be input oracle (input off-chain data to on-chain), output oracle (output on-chain data to off-chain), and I/O oracle (output oracle, then input oracle).
Able to be customized and defined by code.
A process of producing zero-knowledge proof. Usually takes much longer than execution only.
A ZK proof that verifies other ZK proofs, for compressing more knowledge into a single proof.
A process that involves the burning of staked token for a node with bad behavior.
A required process that involves the depositing and locking of token for a new-entered node into network.
Codes that define and configs indexing computation of The Graph's indexer.
Complete alignment with the Subgraph specification and syntax.
Able to be verified easily with less computation resources than re-execution. One important quality of zero-knowledge proof.
A model for analyzing trust assumptions and security. See Vitalik's post on trust models.
Able to trust without relying on third-party.
A Strongly-typed JavaScript.
Refers to correctness of computation.
A type of zero-knowledge proof that does not utilize its privacy feature, but succinctness property.
Able to be checked or proved to be correct.
A computation to check if proof is correct. If verification is passed, it means proven statement is correct and finalized.
A role that requires prover to convince with proof. Can be a person, a smart contract in other blockchains, a mobile client, or a web client.
A binary code format or programming language. Commonly used on the Web.
A global P2P network linking three topologically heterogeneous networks (Ethereum, ORA, Storage Rollup) with zero-knowledge proof.
A cryptographic method for proving. Commonly used in blockchain for privacy and scaling solutions. Commonly misused for succinctness property only.
A commonly-used zero-knowledge proof cryptography.
A trustless automation CLE standard with zk, also known as ZK Automation.
A zkVM with EVM instruction set as bytecode.
Zero-Knowledge Machine Learning (zkML) is a machine learning proving framework. It uses cryptography to ensure the validity of AI inference results.
A virtual machine with zk that generates zk proofs.
Reached when zk proof is verified, or dispute period is passed, and data becomes fully immutable and constant. See .
More information can be found on page.
Network of nodes - Nodes execute AI inference and return results back to the blockchain. Validity of the result is proven through one of the proving frameworks: , , .
For more information check page.
For more information check page.
For more information check page.
For more information check page.
Frequently Asked Questions
Neither.
Rebuilding and replacing.
We are the first and your last oracle.
Yes.
ORA is the verifiable oracle protocol.
Yes.
Simply put, ORA is a key network of the entire World Supercomputer network. In addition, World Supercomputer is also served by Ethereum as the consensus network and Storage Rollup as the storage network.
The main difference between a modular blockchain (including L2 rollups) and the world computer architecture lies in their purpose. A modular blockchain is created by selecting modules (consensus, DA, settlement, and execution) and putting them together into a new blockchain, while World Supercomputer establishes a global decentralized computer by combining networks (base-layer blockchain, storage network, computation network).
The generated content will be stored on a decentralized storage network, e.g. IPFS.
For IPFS, you can retrieve the content through an IPFS gateway, using the id returned by AI Oracle.
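Retrieval via a gateway just prepends the gateway base to the content id. A minimal sketch, in which the gateway host and the CID are placeholder assumptions:

```python
# Sketch: building an IPFS gateway URL for a result returned by AI Oracle.
# The default gateway host and the example CID are placeholder assumptions.

def ipfs_gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    """Return the HTTP URL at which the content `cid` can be fetched."""
    return f"{gateway}/ipfs/{cid}"

url = ipfs_gateway_url("QmExampleCid123")
# -> "https://ipfs.io/ipfs/QmExampleCid123"
```

Any public or self-hosted gateway that serves the `/ipfs/<cid>` path works the same way.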
Normally, optimistic rollups choose 7 days as their challenge period.
As mentioned before, as long as the challenge time of an optimistic fraud proof is shorter than zk proof generation, fraud proofs are faster than zk proofs.
It needs to be noted that zkML is not possible for huge AI models, or that zk proof generation in the zkML approach is much slower than the opML approach.
You have multiple options for building ORA's onchain AI:
Privacy, because all models in opML need to be public and open-source so that network participants can challenge them.
Here are some facts:
OAO fee = Model Fee (for LlaMA2 or Stable Diffusion) + Callback Fee (for the node to submit the inference result back onchain) + Network Fee (i.e., the transaction fee of networks like Ethereum)
Callback fee and network fee may be higher when network is experiencing higher traffic.
Callback fee may be lower if you are using a model such as Stable Diffusion, because the inference result is shorter (just an IPFS hash, instead of the long paragraphs of an LLM).
Given the OAO fee structure, the callback fee usually takes up the major portion of the overall fee, because storing the inference result's data on Ethereum is expensive.
For lower cost, you can try using the Stable Diffusion model with OAO on Ethereum mainnet (because its callback fee is lower), or use OAO on other networks.
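The fee structure above can be sketched as a simple sum. All numbers below are hypothetical placeholders, not real ORA fees:

```python
# Sketch of the OAO fee structure: total = model fee + callback fee +
# network fee. All values below are hypothetical placeholders; real fees
# depend on the model, the network, and current traffic.

def oao_fee(model_fee: float, callback_fee: float, network_fee: float) -> float:
    """Total fee paid for one AI Oracle request (in the network's native token)."""
    return model_fee + callback_fee + network_fee

# Hypothetical example: the callback fee dominates because storing the
# inference result onchain is the expensive part.
total = oao_fee(model_fee=0.0003, callback_fee=0.005, network_fee=0.001)
```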
ORA is a verifiable oracle protocol. The ORA network looks like a "layer 2" on a typical blockchain network; however, it doesn't scale smart contracts' computation but extends smart contracts' features. An actual rollup requires or .
Strictly speaking, nothing is trustless, because you still have to trust some fundamental rules like math or cryptography. But we still use the term trustless to describe the trustworthiness, security, verifiability, and decentralization of our network. is still a good word that describes ORA.
The model will have to use deterministic inference (learn more from ), either by using the , or by moving the model into our deterministic VM (recommended for better support).
For opML the challenge period can be shorter, because it is not a rollup, which involves many financial operations and maintains a public ledger. When optimized, the challenge period can be like that of or .
zkML of (the most battle-tested and performant zkML framework)
of (run huge model like LlaMA2-7B and Stable Diffusion now)
zk+opML with (futuristic onchain AI fuses zkML's privacy and opML's scalability)
We recommend , because it is production-ready and works out of the box, with support for LlaMA2-7B and Stable Diffusion.
That is why we came up with to add privacy feature to opML.
Modulus Labs used zkML to bring GPT2-1B onchain with .
The zkML framework EZKL .
, zkML has >>1000 times overhead than pure computation, with being 1000 times, and .
According to , the average proving time of RISC Zero is 173 seconds for Random Forest Classification.
You can also check out on our zkML framework with others.
Reach out in our .