Overview, Mission, Offerings
ORA provides chain-agnostic infrastructure that bridges the gap between AI and blockchain.
We empower developers with the tools necessary to build end-to-end trustless and decentralized applications enhanced by verifiable AI.
ORA is live today, offering the capability to verify and run inference on the largest and most sophisticated AI models with few limitations.
At ORA, we believe the intersection of AI and crypto unlocks entirely new use cases.
Our mission is to provide developers and organizations with accessible, scalable, and verifiable AI solutions that seamlessly integrate on-chain. By providing these tools and enabling the tokenization of AI models, we create a unified platform that not only advances decentralized AI applications but also funds open-source AI development.
AI Oracle is ORA's verifiable and decentralized AI oracle, enabling developers to integrate AI functionalities into their smart contracts seamlessly. The AI Oracle is supported by ORA's network of permissionless nodes running the TORA client and is secured by optimistic Machine Learning (opML).
Verifiable AI Inference: Bring AI computations onchain with assurance of correctness and trustlessness.
Developer-Friendly Integration: Build custom smart contracts that interact with the AI Oracle to utilize AI models.
Use Cases: From AI-generated NFTs and onchain games to prediction markets and content verification, ORA provides the tools necessary to build your unique application.
IMOs represent a new mechanism for tokenizing AI models onchain, promoting sustainable funding for open-source contributors while distributing revenue to tokenholders.
Tokenization of AI Models: Share ownership and profits of AI models through ERC-7641.
Sustainable Open-Source Funding: Encourage ongoing contributions and development in the open-source AI community.
Benefits to Tokenholders: Capture value from onchain revenue and AI-generated content assets.
ORA was previously named HyperOracle.
How can the ORA network be used?
Verifiable AI Computations: Use ORA's AI Oracle to obtain AI inference results that are verified onchain, eliminating the need to trust external APIs.
Enhanced Decision Making: Implement AI-driven logic in smart contracts for applications like automated trading strategies, dynamic NFTs, or adaptive DeFi protocols.
Verifiable AI Art and Media: Create and tokenize AI-generated art, music, or writing with guaranteed provenance using models like Stable Diffusion.
ERC-7007 Tokens: Mint verifiable AI-generated content as NFTs, providing transparency and trust in the origin of digital collectibles.
Interactive Experiences: Develop games and virtual environments where AI generates dynamic content, enhancing user engagement with verifiable AI elements.
Initial Model Offerings (IMOs): Tokenize AI models using the ERC-7641 standard to fund open-source AI development sustainably.
Community Ownership: Align incentives between developers and stakeholders by allowing token holders to share in the revenue generated by AI models.
Accelerated Innovation: Encourage continuous contributions to open-source AI projects by providing a financial ecosystem that rewards development.
Automated Governance: Utilize AI models to analyze proposals and assist in decision-making processes within DAOs.
Risk Management: Implement AI for real-time risk assessment in lending platforms or asset management protocols.
Optimized Yield Strategies: Use AI to enhance yield farming and liquidity provision, maximizing returns for DeFi users.
Prediction Markets: Employ AI for accurate forecasting and automated settlement, improving efficiency and trust in prediction markets.
Underwriting and Credit Scoring: Use verifiable AI models to assess borrower credibility, enabling undercollateralized lending solutions.
Onchain Insurance: Develop insurance products with AI-driven claim verification and risk assessment, streamlining the insurance process.
AI-Powered Wallets: Integrate AI for improved security features like fraud detection and intent recognition in crypto wallets.
Personalized Interactions: Offer users tailored experiences in decentralized applications through AI-driven personalization and recommendations.
Language and Accessibility Tools: Utilize AI for translation and accessibility, making blockchain technologies more inclusive globally.
Agentic Architectures: Create verifiable AI agents that can own assets, execute transactions, and interact autonomously with blockchain applications.
Verification Services for AI Agents: Ensure that AI agents operate correctly and securely through ORA's verification mechanisms.
New Economic Models: Explore innovative models where AI agents participate in economies, such as automated trading bots or virtual assistants.
opML, opp/ai, IMO
The ORA team has been tirelessly publishing research and shipping developer tools. Read about our key milestones here:
Introduced opML: A novel fraud-proof mechanism enabling the verification of large AI models directly on-chain. Read the research paper.
Invented opp/ai: The first combined opML and zkML framework, offering both the scalability of opML and the guarantees of zkML. Read the research paper.
Pioneered Verified Inference for Stable Diffusion: Achieved verified inference for Stable Diffusion on-chain, paving the way for bringing verifiable AI-generated content into the blockchain ecosystem.
Integrated the First Large Language Model (LLM) Onchain: Successfully brought LLaMA2-7B outputs on-chain, making advanced AI models accessible within blockchain applications.
Launched the First AI Model Tokenization: Introduced tokenizing AI models to fund open-source development, enabling the first tokenized model $OLM based on OpenLM through Initial Model Offerings (IMOs).
In this tutorial, we’ll explore advanced techniques for interacting with the AI Oracle. Specifically, we’ll dive into topics like Nested Inference, Batch Inference, and Data Availability (DA) Options, giving you a deeper understanding of these features and how to leverage them effectively.
A user can perform nested inference by initiating a second inference based on the result of the first inference within a smart contract. The whole action completes atomically and is not restricted to two inference steps.
Some of the use cases for a nested inference call include:
generating a prompt with an LLM for an AIGC (AI-Generated Content) NFT
extracting data from a data set, then generating visual data with different models
adding a transcript to a video, then translating it into different languages with different models
For demo purposes we built a Farcaster frame that uses ORA's AI Oracle.
The idea of the nested inference contract is to execute multiple inference requests in one transaction. We'll modify the Prompt contract to support a nested inference request. In our example, it will call the Llama3 model first, then use the inference result as the prompt for another request to the StableDiffusion model.
The main goal of this tutorial is to understand what changes we need to make to the Prompt contract in order to implement logic for various use cases.
modify the calculateAIResult method to support multiple requests
modify aiOracleCallback with the logic to handle the second inference request
💡 When estimating the gas cost for the callback, we should take both models into consideration.
We now have an additional function parameter for the second model ID. Note that we encode and forward model2Id as callback data in the aiOracle.requestCallback call.
The main change here is within the "if" block. If callback data (model2Id) is present, we execute a second inference request to the AI Oracle.
The output from the first inference call is passed to the second one. This enables interesting use cases where you can combine text-to-text (e.g. Llama3) and text-to-image (e.g. Stable Diffusion) models.
If the nested inference call is not successful, the whole function will revert.
💡 When interacting with the contract from the client side, we need to pass the cumulative fee (for both models); then for each inference call we pass the corresponding part of that cumulative fee. This is why we also call estimateFee for model2Id.
This is an example of contract interaction from the Foundry testing environment. Note that we're estimating the fee for both models and passing the cumulative amount during the function call (we pass slightly more to ensure that the call will execute if the gas price changes).
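The client-side fee handling can be sketched offline as follows. The fee values and the 10% safety margin are illustrative assumptions, not protocol constants; real fees come from calling estimateFee on the contract for each model.

```javascript
// Sketch of cumulative fee calculation for a nested inference call.
// The concrete fee values below are placeholders; on-chain you would
// obtain them via estimateFee(modelId) for each of the two models.
const LLAMA3_FEE = 30000000000000n;           // hypothetical fee for the first model (wei)
const STABLE_DIFFUSION_FEE = 50000000000000n; // hypothetical fee for the second model (wei)

// Cumulative fee for both models, padded by ~10% so the call still
// executes if the gas price moves between estimation and execution.
function cumulativeFee(fee1, fee2, marginPercent = 10n) {
  const total = fee1 + fee2;
  return total + (total * marginPercent) / 100n;
}

const value = cumulativeFee(LLAMA3_FEE, STABLE_DIFFUSION_FEE);
console.log(value); // the msg.value to send along with the nested inference call
```

The margin is a design choice: estimating too tightly risks a revert if gas prices rise before the transaction lands, while any surplus approach depends on the contract's refund behavior.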
You can also check the full implementation of PromptNestedInference.
The Batch Inference feature enables sending multiple inference requests within a single transaction, reducing costs by saving on network fees and improving the user experience. This bulk processing allows for more efficient handling of requests and results, making it easier to manage the state of multiple queries simultaneously.
Some of the use cases might be:
AIGC NFT marketplace - creating a whole AIGC NFT collection with just one transaction, instead of many
Apps that need to handle requests simultaneously - good examples are a recommendation system or a chatbot with high TPS
The model fee required for a batch inference request is calculated by multiplying the single model fee by the batch size (batchSize * model.fee). This fee covers the operational costs of running AI models, with a portion contributing to protocol revenue. For details on the required fees for each model, visit the References page.
The callback transaction fee is needed for the AI Oracle to submit the callback transaction. It's calculated by multiplying the current gas price by the callback gas limit of the invoked model (gasPrice * callbackGasLimit).
The request transaction fee is the regular blockchain fee needed to request inference by invoking the aiOracle.requestCallback method.
The total fee is calculated as the sum of the model fee, the callback transaction fee, and the request transaction fee.
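The fee formula above can be sketched as follows; every numeric input is an illustrative placeholder, with real values coming from the AIOracle contract and current network conditions.

```javascript
// Sketch of the total fee for a batch inference request:
//   batchSize * model.fee  +  gasPrice * callbackGasLimit
// (the request transaction fee is paid separately as regular gas).
function totalBatchFee({ modelFee, batchSize, gasPrice, callbackGasLimit }) {
  const modelFees = modelFee * BigInt(batchSize);  // batchSize * model.fee
  const callbackFee = gasPrice * callbackGasLimit; // gasPrice * callbackGasLimit
  return modelFees + callbackFee;
}

const fee = totalBatchFee({
  modelFee: 10000000000000n, // hypothetical per-request model fee (wei)
  batchSize: 5,
  gasPrice: 20000000000n,    // 20 gwei, illustrative
  callbackGasLimit: 500000n, // illustrative per-model limit
});
console.log(fee); // wei to send with the batch request
```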
To initiate a batch inference request, we interact with the requestBatchInference method. This method takes an additional batchSize parameter, which specifies the number of requests to the AI Oracle.
Note that we'll need to pass more gas to cover the AI Oracle callback execution, depending on batchSize (check out Pricing). For this purpose we can implement an estimateFeeBatch function in our Prompt contract. This function will call the estimateFeeBatch method from AIOracle.sol.
Input for batch inference should be structured as a string representing an array of prompt and seed values.
Prompt - string value that is mandatory in order to prompt AI Oracle.
Seed – an optional numeric value that, when used, allows you to generate slightly varied responses for the same prompt.
The result of a batch inference call is a dot-separated list of inference results.
This is the prompt for interacting with the Stable Diffusion model:
The result is a dot-separated list of IPFS CIDs:
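A minimal sketch of building a batch input and parsing the dot-separated result. The exact input encoding expected by the contract may differ; a JSON-style array of prompt/seed objects is an assumption here, while splitting results on "." is safe because IPFS CIDs contain no dots.

```javascript
// Build the batch input string from an array of {prompt, seed} values.
// The JSON encoding is an assumed format for illustration only.
function buildBatchInput(items) {
  return JSON.stringify(items);
}

// Parse a dot-separated batch result into individual inference results.
function parseBatchResult(result) {
  return result.split(".");
}

const input = buildBatchInput([
  { prompt: "a cat in space", seed: 1 }, // seed varies the response per prompt
  { prompt: "a cat in space", seed: 2 },
]);

// Shape of a two-result Stable Diffusion batch output (placeholder CIDs).
const cids = parseBatchResult("QmAaa111.QmBbb222");
console.log(cids.length); // 2
```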
When performing batch inference with the AI Oracle, ensure the prompt follows the standard format. Below is a simple script for interacting with batch inference:
To test it, you need to:
create a new JavaScript file
copy the script and add the env variables
create and deploy a Prompt contract that supports batch inference
add values for batchInference_abi and batchInference_address
Prompt examples
That's it! With a few simple changes to the Prompt contract we utilized the batch inference feature. This allowed us to get multiple responses with only one transaction.
When interacting with the AI Oracle, it is crucial to allocate sufficient gas for the execution of the callback function. The gas consumed during the callback depends entirely on the implementation of the aiOracleCallback method. Its logic directly impacts gas usage: the more complex the logic, the higher the gas consumption. Carefully design your aiOracleCallback to balance functionality and gas efficiency.
To estimate the gas required for the aiOracleCallback in a Foundry project:
Run the following command to generate a gas report:
Locate your Prompt contract in the report and check the gas usage for the aiOracleCallback method.
Use the calculated gas amount to set the gas limit during the initialization of the Prompt contract (in the constructor).
Alternatively you can use other tools like Hardhat or Tenderly.
We provide a test script for gas limit estimation in the template repository.
To execute the estimation, run: forge test --match-path test/EstimateGasLimit.t.sol -vvvv. Next to the callback function you can observe the gas amount needed for execution, e.g. [1657555] Prompt::aiOracleCallback; in this case we can set the gas limit to 1700000.
The above script estimates the gas necessary for the Stable Diffusion callback. Note that the result will always be of constant size (an IPFS CID).
To estimate the gas limit for Llama3, you need to change modelId and result.
💡 The maximum length of a Llama3 response is a 200-word string, hence we use this string size to estimate the gas limit.
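The rounding from a measured figure to a configured limit (1657555 measured, 1700000 configured in the example above) can be sketched as follows; the 2% margin and the 100000-unit rounding step are illustrative choices, not requirements.

```javascript
// Pad a measured callback gas figure and round it up to a clean limit,
// so the configured limit survives small variations between runs.
function gasLimitFromMeasurement(measuredGas, marginPercent = 2, step = 100000) {
  const padded = Math.ceil(measuredGas * (1 + marginPercent / 100));
  return Math.ceil(padded / step) * step;
}

console.log(gasLimitFromMeasurement(1657555)); // 1700000
```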
Build your dApp using ORA's AI Oracle
This tutorial will help you understand the structure of the AI Oracle and guide you through the process of building a simple Prompt contract that interacts with the ORA network.
If you prefer a video version of the tutorial, check it here.
Final version of the code can be found here.
Setup the development environment
Understand the project setup and template repository structure
Learn how to interact with the AI Oracle and build an AI powered smart contract
To follow this tutorial you need to have Foundry and git installed.
Clone template repository and install submodules
Move into the cloned repository
Copy .env.example, rename it to .env. We will need these env variables later for the deployment and testing. You can leave them empty for now.
Install foundry dependencies
At the beginning we need to import several dependencies which our smart contract will use.
IAIOracle - interface that defines the requestCallback method used to request inference from the AI Oracle
AIOracleCallbackReceiver - an abstract contract that contains an instance of AIOracle and implements a callback method that needs to be overridden in the Prompt contract
We'll start by implementing the constructor, which accepts the address of the deployed AIOracle contract.
Now let’s define a method that will interact with the AI Oracle. This method requires two parameters: the ID of the model and the input prompt data. It also needs to be payable, because the user passes a fee for the callback execution.
In the code above we do the following:
Convert input to bytes
Call the requestCallback function with the following parameters:
modelId: ID of the AI model in use.
input: User-provided prompt.
callbackAddress: The address of the contract that will receive AI Oracle's callback.
callbackGasLimit[modelId]: Maximum amount of gas that can be spent on the callback, yet to be defined.
callbackData: Callback data that is used in the callback.
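Off-chain, preparing these parameters can be sketched as follows. The field values are purely illustrative; input mirrors the Solidity conversion of the prompt string to bytes.

```javascript
// Sketch of preparing a requestCallback call off-chain: the prompt is
// UTF-8 encoded to bytes, mirroring the Solidity bytes(input) conversion.
// Addresses and limits below are placeholders, not real deployments.
const modelId = 50; // Stable Diffusion in the examples of this tutorial

const prompt = "a serene mountain lake";
const inputBytes = Buffer.from(prompt, "utf8"); // bytes passed on-chain

const request = {
  modelId,
  input: "0x" + inputBytes.toString("hex"),
  callbackAddress: "0x0000000000000000000000000000000000000000", // your Prompt contract
  callbackGasLimit: 500000, // looked up per model from the mapping described below
  callbackData: "0x",       // left empty in this tutorial
};
console.log(request.input.startsWith("0x")); // true
```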
Next step is to define the mapping that keeps track of the callback gas limit for each model and set the initial values inside the constructor. We’ll also define a modifier so that only the contract owner can change these values.
💡 Gas Limit Estimation - It's important to define callbackGasLimit for each model to ensure that AIOracle will have enough funds to return the response! Read more on Callback gas limit Estimation page.
We want to store all the requests that occurred. For that we can create a data structure for the request data and a mapping between requestId and the request data.
In the code snippet above we added prompt, sender and the modelId to the request and also emitted an event.
Now that we implemented a method for interaction with the AI Oracle, let's define a callback that will be invoked by the AI Oracle after the computation of the result.
We've overridden the callback function from AIOracleCallbackReceiver.sol. It is important to use the modifier so that only the AI Oracle can call back into our contract.
Function flow:
First we check if a request with the provided ID exists. If it does, we add the output value to the request.
Then we update the prompts mapping that stores all the prompts and outputs for each model that we use.
At the end we emit an event that the prompt has been updated.
Notice that this function takes callbackData as the last parameter. This parameter can be used to execute arbitrary logic during the callback; it is passed during the requestCallback call. In our simple example, we left it empty.
Finally let's add the method that will estimate the fee for the callback call.
With this, we've completed the source code for our contract. The final version should look like this:
Add your PRIVATE_KEY, RPC_URL and ETHERSCAN_KEY to the .env file. Then source the variables in the terminal.
Create a deployment script
Go to Reference page and find the OAO_PROXY address for the network you want to deploy to.
Then open script/Prompt.s.sol and add the following code:
Run the deployment script
Once the contract is deployed and verified, you can interact with it. Go to blockchain explorer for your chosen network (eg. Etherscan), and paste the address of the deployed Prompt contract.
Let's use Stable Diffusion model (id = 50) as an example.
First call the estimateFee method to calculate the fee for the callback.
Request AI inference from the AI Oracle by calling the calculateAIResult method. Pass the model ID and the prompt for image generation. Remember to provide the estimated fee as the value for the transaction.
After the transaction is executed and the AI Oracle calculates the result, you can check it by calling the prompts method. Simply input the model ID and the prompt you used for image generation. In the case of Stable Diffusion, the output will be a CID (a content identifier on IPFS). To check the image, go to https://ipfs.io/ipfs/[Replace_your_CID].
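Assembling the gateway URL from the returned CID can be sketched as a one-line helper; the ipfs.io gateway is the one mentioned above, and any public IPFS gateway would work the same way.

```javascript
// Turn the CID returned by the prompts method into a viewable URL.
function ipfsUrl(cid, gateway = "https://ipfs.io/ipfs/") {
  return gateway + cid;
}

console.log(ipfsUrl("QmecBGR7dD7pRtY48FEKoeLVsmBTLwvdicWRkX9xz2NVvC"));
```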
Install the browser wallet if you haven't already (eg. Metamask)
Open your solidity development environment. We'll use Remix IDE.
Copy the contract along with necessary dependencies to Remix.
Choose the solidity compiler version and compile the contract to the bytecode
Deploy the compiled bytecode. Once we've compiled our contract, we can deploy it.
First go to the wallet and choose the blockchain network for the deployment.
To deploy the Prompt contract we need to provide the address of the already deployed AIOracle contract. You can find this address on the Reference page; we are looking for the OAO_PROXY address.
Deploy the contract by signing the transaction in the wallet.
Once the contract is deployed, you can interact with it. Remix provides an interface for interaction, but you can also use blockchain explorers like Etherscan. Let's use the Stable Diffusion model (id = 50).
In this tutorial we covered, step by step, writing a Solidity smart contract that interacts with ORA's AI Oracle. We then compiled and deployed our contract to a live network and interacted with it. In the next tutorial, we will extend the functionality of the Prompt contract to support AI-generated NFT collections.
Source code of AI Oracle: https://github.com/ora-io/OAO
For supported models and deployment addresses, see Reference page.
For example integrations and ideas to build, see awesome-ora.
Check out the video tutorial on interacting and building with the AI Oracle.
An AI Oracle powered by Optimistic Machine Learning (opML)
ORA's AI Oracle is a verifiable oracle protocol that allows developers to create smart contracts and applications that ingest verifiable machine learning (ML) inferences for use on the blockchain.
AI Oracle is powered by Optimistic Machine Learning (opML). It enables a verifiable, transparent, and decentralized way to integrate advanced ML models like LLaMA 3 and Stable Diffusion into smart contracts.
It offers:
Scalability: Can run any ML model onchain without prohibitive overhead.
Efficiency: Reduces computational time and cost when compared to zkML.
Practicality: Can easily be integrated into applications by using existing blockchain infrastructure.
Verifiability: Leverages fraud proofs to provide applications and users with assured computational integrity.
AI Oracle comprises a set of smart contracts and off-chain components:
opML Contract: Handles fraud proofs and challenges to ensure on-chain verifiability of ML computations.
AIOracle Contract: Connects the opML node with on-chain callers to process ML requests and integrates various ML models.
User Contract: Customizable contracts that initiate AI inference requests and receive results directly from AI Oracle.
ORA Nodes: Machines running the Tora client that interact with the ORA network. Nodes currently perform two functions: submitting or validating inference results.
ORA provides chain-agnostic infrastructure that seamlessly connects AI and blockchain.
We equip developers with the essential tools to create decentralized, trustless applications powered by verifiable AI.
Unveiled by the ORA Foundation, $ORA is the token of the ORA ecosystem, designed to drive advancements in blockchain intelligence through decentralized AI.
$ORA's utility includes access to IMO, node operations, governance, and more.
Ethereum
Base
Solana
Hyperliquid
$ORA is live and trading at:
Uniswap v3 Pool on Ethereum Mainnet: 0x4e4a4c4c46d3488ff35ff05a0233785a30f03ec4
Uniswap v3 Pool on Base Mainnet: 0x316f12517630903035a0e0b4d6e617593ee432ba
ORA/USDC Pool on Hyperliquid: 0xd7a5b9b760fd758d2e3f6f3ecd2ae5bb
ORA/SOL Pool on Raydium: BPYhCMNao2XG4UR771LurHrwUg3rcp76jmg4XFfAacvg
For more markets, including centralized exchanges, where you can access $ORA, visit $ORA's CoinMarketCap and $ORA's CoinGecko pages.
On EVM networks (Ethereum, Base...) and Solana, $ORA is an OFT token based on LayerZero's OFT standard.
To bridge $ORA token, you can choose to use:
zero fee; additional token support
fast & economy mode; support for most networks
Stargate also supports $ORA on Arbitrum and BNB Chain
local execution; technical skill required
Currently, bridging of $ORA between Hyperliquid and other networks is not supported.
Inspired by Vitalik Buterin’s vision of a DAICO (a hybrid of DAO governance for transparency and an ICO’s open fundraising), $ORA was launched with a Decentralized AI Community Offering.
Total Supply: 333,333,333
Circulating Supply: 36,666,666.63
Token Distribution:
$ORA token is central to the ORA ecosystem, serving multiple roles including:
Access to IMO: $ORA grants ecosystem members access to IMOs, funding the next state-of-the-art AI models and driving innovation in open-source AI.
Node Operator: By staking $ORA, operators can run decentralized nodes, supporting services like OAO and RMS while earning fees.
Governance: $ORA holders play a crucial role in shaping the future of the protocol through decentralized governance.
Others to be announced.
Learn more about $ORA and the ORA ecosystem:
ORA Foundation: https://foundation.ora.io
ORA Foundation's X: https://x.com/FoundationORA
The Optimistic Machine Learning (opML) framework connects off-chain machine learning computations with smart contracts. Results are provably correct and can be disputed on-chain to ensure trust and accountability, enabling efficient AI-driven decision-making in decentralized applications. opML is a core component of the AI Oracle, playing a critical role in validating inference results.
Initiate Request: The user contract sends an inference request to AI Oracle by calling the requestCallback function.
opML Request: AI Oracle creates an opML request based on the user contract request.
Event Emission: AI Oracle emits a requestCallback event, which is collected by the opML node.
ML Inference: The opML node performs the AI model computation.
Result Submission: The opML node uploads the result on-chain.
Callback Execution: The result is dispatched to the user's smart contract via a callback function.
Challenge Window: Begins after the result is submitted on-chain (step 5 above).
Verification: opML validators or any network participant can check the results and challenge the output if it is incorrect.
Result Update: If a challenge is successful, the incorrect result is updated on-chain.
Finality: After the challenge period, the result is finalized onchain and made immutable.
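The optimistic lifecycle above (submit, challenge window, optional result update, finality) can be sketched as a tiny state machine; the window length and result values are illustrative stand-ins, not protocol parameters.

```javascript
// Minimal sketch of the opML result lifecycle.
function createRequest(submittedResult, challengeWindowBlocks = 100) {
  return { result: submittedResult, finalized: false, windowLeft: challengeWindowBlocks };
}

// A successful fraud proof inside the window updates the on-chain result.
function challenge(req, correctResult) {
  if (req.finalized || req.windowLeft <= 0) throw new Error("challenge window closed");
  if (req.result !== correctResult) req.result = correctResult;
}

// After the window elapses, the result becomes immutable.
function advanceBlocks(req, n) {
  req.windowLeft = Math.max(0, req.windowLeft - n);
  if (req.windowLeft === 0) req.finalized = true;
}

const req = createRequest("wrong-output"); // step 5: result submitted on-chain
challenge(req, "correct-output");          // a validator disputes within the window
advanceBlocks(req, 100);                   // window elapses, result finalized
console.log(req.result, req.finalized);
```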
Easy to use guides for non-developers
As a user, to interact with AI Oracle, you can:
Use the AI.ORA.IO frontend (view our video tutorial)
Directly via the Prompt contract on Etherscan
In the Prompt contract's Read Contract section, call estimateFee with the specified modelId.
In the Prompt contract's Write Contract section, call calculateAIResult with the fee (converted from wei to ether), prompt, and modelId.
In the AI Oracle contract, look for the new transaction that fulfills the request you sent.
In the Prompt contract's Read Contract section, call getAIResult with the previous modelId and prompt to view the AI inference result.
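Since estimateFee returns a value in wei while the explorer's value field expects ether, the conversion is a division by 1e18. A small helper sketch using string math on BigInt avoids floating-point loss:

```javascript
// Convert a wei amount (BigInt) to an ether string: divide by 10^18.
function weiToEther(wei) {
  const s = wei.toString().padStart(19, "0"); // ensure at least one integer digit
  const whole = s.slice(0, -18);              // digits above 10^18
  const frac = s.slice(-18).replace(/0+$/, ""); // fractional part, trailing zeros trimmed
  return frac ? `${whole}.${frac}` : whole;
}

console.log(weiToEther(30000000000000000n)); // 0.03
```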
If you want to retrieve a historical AI inference (e.g. an AIGC image), you can find it on a blockchain explorer:
Find your transaction for sending AI request, and enter the "Logs" tab. Example tx: https://etherscan.io/tx/0xfbfdb2efcee23197c5ea8487368a905385c84afdc465cab43bc1ad01da773404#eventlog
Access your requestId in the "Logs" tab. In the example's case it is "1928".
In OAO's smart contract, look for the Invoke Callback transaction with the same requestId.
Normally, this transaction will be around the same time as the one in step 1. To filter transactions by date, click "Advanced Filter" and then the button near "Age".
Find the transaction for AI inference result, and enter the "Logs" tab. Example tx: https://etherscan.io/tx/0xfbfdb2efcee23197c5ea8487368a905385c84afdc465cab43bc1ad01da773404#eventlog
Access the output data. In the example's case it is "QmecBGR7dD7pRtY48FEKoeLVsmBTLwvdicWRkX9xz2NVvC", which is an IPFS hash that can be accessed with an IPFS gateway.
To showcase how AI Oracle can enhance consumer-facing products, we introduce Fortune Teller.
Fortune Teller leverages the NestedInference smart contract, as discussed in the Advanced Usages of AI Oracle. This application aims to onboard new users to Web3 and demonstrate the capabilities of AI Oracle.
User Interaction: The application prompts users to answer specific questions posed by the Magic Wizard.
Fortune Telling: The Wizard casts its spells by requesting inferences from AI Oracle.
Response Generation: The Llama3 model generates a textual response, which is then used as a prompt for image creation via StableDiffusion.
NFT Minting: Users can mint AI-generated NFTs if they are satisfied with their fortune image.
The image on the right represents the user's fortune result generated by AI Oracle:
Text: Generated by Llama3 (onchain tx)
Image: Generated by Stable Diffusion v3 (onchain tx)
Objective AI-Generated Insights: AI provides neutral and unbiased outputs, ensuring a fair experience across diverse use cases.
Immutable Onchain Data: Information generated is securely stored on the blockchain, making it tamper-proof and easily verifiable.
Transparent Data Generation: Utilizing opML, the entire generation process is transparent, fostering trust in the system across different applications.
💡 If you're interested in creating your own frame, check out our AI Oracle Frame Template repository. This template can help you bootstrap your application, but you'll need to modify it based on your specific use case.
We encourage you to develop your own applications. Explore ideas in the awesome-ora repository, whether it’s a Farcaster frame, a Telegram mini app, or any other client application that interacts with AI Oracle.
Quick Start Tutorial Guide
Tora Launcher is a multi-platform Tora client running as a desktop application.
Users with any level of experience can use it to start up and run their own Tora node, just like running a desktop wallet.
Currently, Tora Launcher supports MacOS, Linux, and Windows.
Releases can be found here: Download Tora Launcher.
This is the guide for the macOS version. If you are using another operating system, please note there may be differences.
Go to releases page, find the release tagged with "Latest", and download the file ending with .dmg.
If you encounter issues when starting up the app, please follow this guide to verify the app on your device.
Double click the .dmg file you just downloaded, and follow the guide on your device to install.
After opening up the app, you will need to install the Tora docker image that includes the OpenLM model.
Click on "INSTALL TORA".
Keep the Docker app open while installing. If Docker isn't open when you reach this step, you will need to restart the Tora app.
After installation, you will see the app's main screen.
Click on "SETTING".
Enter configuration for all required fields:
PRIVATE KEY: the private key of the wallet your node uses to pay gas; make sure it has sufficient balance for node operations on the blockchain. Please note: it is recommended to use a separate wallet rather than your daily wallet.
MODEL: the AI model in the Onchain AI Oracle that you want to run in your node.
NETWORK: the network to which you submit validation results.
WEBSOCKET URL and JSON-RPC URL: the RPC endpoints used for getting blockchain data. Default RPCs are provided.
Scroll down and click on "START" to start your node.
If you are starting the node for the first time, you are required to set up a password.
When the node is running, you will see its status in the top-right corner and on the "NODE" page.
You can also check the logs on the "LOG" page.
For any questions, please join our Discord.
Running ORA nodes with the Tora Validator Client
Tora Validator is a validator client of the AI Oracle network, written in TypeScript.
As the first released version of the ORA node, it validates AI inference results submitted by AI Oracle nodes.
To enhance client diversity and further decentralize the network, future updates will introduce additional types of nodes.
Tora Validator Node performs two main functions:
Validation: It validates AI inference results by executing AI computations within the opML framework.
Blockchain Interaction: The node interacts with ORA's AI Oracle smart contracts by monitoring AI requests and confirming validated inference results.
Tora Validator supports the OpenLM model (model ID: 13) on the Ethereum mainnet. The node runs this model and confirms its AI inference results on-chain, contributing to the overall security and trustworthiness of the network.
To encourage participation, Tora offers two options of client software for running the node:
Desktop App (Tora Launcher): A beginner-friendly approach for users who prefer a graphical interface.
CLI: An advanced option for those comfortable with command-line operations, offering more control and flexibility.
Bring Your Own Model
In this tutorial we explain how to integrate your own AI model with ORA's AI Oracle. We will start by looking at the repository to understand what's happening there. At the end, we will showcase how it works by running a simple dispute game script inside a Docker container.
Understand how to transform an AI model inference code and integrate it with ORA's AI Oracle
Execute a simple dispute game and understand the process of AI inference verification.
Docker installed
Clone the repository
Navigate to the cloned repository
To install the required dependencies for your project, run the following command:
If there are some missing dependencies, make sure to install them in your Python environment.
First we need to train a DNN model using PyTorch. The training part is shown in examples/mnist/trainning/mnist.ipynb.
After training, the model is saved at examples/mnist/models/mnist/mnist-small.state_dict.
Navigate to the mnist folder
Convert python model into ggml format
To convert an AI model written in Python to the ggml format, we execute a Python script, providing the file that stores the model as a parameter. The output is a binary file in ggml format. Note that the model is saved in big-endian, making it easy to process in the big-endian MIPS-32 VM.
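The effect of endianness on serialized weights can be illustrated with a single float32: the same value yields reversed byte orders, which is why the conversion script writes big-endian so the big-endian MIPS-32 VM can read weights without byte swapping.

```javascript
// Serialize a float32 in either byte order and return its raw bytes.
function float32Bytes(value, littleEndian) {
  const buf = new ArrayBuffer(4);
  new DataView(buf).setFloat32(0, value, littleEndian); // DataView makes byte order explicit
  return Array.from(new Uint8Array(buf));
}

// 1.0 as IEEE-754 float32 is 0x3F800000.
console.log(float32Bytes(1.0, false)); // big-endian:    [63, 128, 0, 0]
console.log(float32Bytes(1.0, true));  // little-endian: [0, 0, 128, 63]
```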
The next step is to write the inference code in Go. We will then transform the Go binary into a MIPS VM executable file.
Go supports compilation to MIPS. However, the generated executable is in ELF format; we want a pure sequence of MIPS instructions instead. To build an ML program for the MIPS VM, execute the following steps:
Navigate to the mnist_mips directory and build the Go inference code
We have now compiled our AI model and inference code into MIPS VM executable code.
First, we specify the base operating system that runs inside our container. In this case we're using ubuntu:22.04.
Then we configure SSH keys so that the Docker container can clone all the required repositories.
Navigate to the root directory inside the Docker container and clone the opml repository along with its submodules.
Lastly, we tell Docker to build the executables and run the challenge script.
To successfully clone the opml repository, you need to generate a new SSH key and add it to your GitHub account. Once generated, save the key as id_rsa in the local directory where the Dockerfile is placed, then add the public key to your GitHub account.
Build the Docker image
Run the local Ethereum node
In another terminal, run the challenge script
After executing the steps above, you should see the interactive challenge process in the console.
The script first deploys the necessary contracts to the local node. The proposer opML node executes AI inference, and the challenger nodes can dispute it if they believe the result is invalid. The challenger and proposer interact to find the step at which their computations differ. Once the disputed step is found, it is sent to the smart contract for arbitration. If the challenge is successful, the proposer node gets slashed.
In this tutorial we achieved the following:
converted our AI model from Python to ggml format
compiled AI inference code written in Go to MIPS VM executable format
ran the dispute game inside a Docker container and learned the opML verification process
First, call the estimateFee method to calculate the fee for the callback.
Then request AI inference from the AI Oracle by calling the `calculateAIResult` method. Pass the model id and the prompt for the image generation. Remember to provide the estimated fee as the value for the transaction.
After the transaction is executed and the AI Oracle calculates the result, you can check it by calling the prompts method. Simply input the model id and the prompt you used for image generation. In the case of Stable Diffusion, the output will be a CID (content identifier on IPFS).
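The three-step flow above can be sketched off-chain with a mock oracle. The method names (estimateFee, calculateAIResult, prompts) come from the docs; the mock class, fee value, and fake CID below are purely illustrative:

```python
# Minimal illustrative sketch of the AI Oracle request flow (not ORA's code).
class MockAIOracle:
    def __init__(self):
        self._results = {}

    def estimate_fee(self, model_id: int) -> int:
        # Placeholder flat fee in wei; real fees depend on model and network.
        return 3 * 10**14  # 0.0003 ETH

    def calculate_ai_result(self, model_id: int, prompt: str, value: int):
        if value < self.estimate_fee(model_id):
            raise ValueError("insufficient fee for callback")
        # In the real system the result arrives later via a callback;
        # here we store a fake CID immediately.
        self._results[(model_id, prompt)] = "ipfs-fake-cid"

    def prompts(self, model_id: int, prompt: str) -> str:
        return self._results[(model_id, prompt)]

oracle = MockAIOracle()
fee = oracle.estimate_fee(50)                      # 1. estimate the callback fee
oracle.calculate_ai_result(50, "a red fox", fee)   # 2. request inference, paying the fee
print(oracle.prompts(50, "a red fox"))             # 3. read the result once available
```

In a real contract the fee is forwarded as the transaction's value (msg.value), and the result only appears after the oracle's callback executes.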
Ggml is a file format that consists of a version number, followed by three components that define a large language model: the model's hyperparameters, its vocabulary, and its weights. Ggml allows for more efficient inference runs on CPU. We will now convert the Python model to ggml format by executing the following steps:
The build script will compile the Go code and then run a script that transforms the compiled Go code into the MIPS VM executable file.
We can now test the dispute game process. We will use a script from the opml repository to showcase the whole verification flow.
For this part of the tutorial we will use Docker, so make sure to have it installed.
Let's first check the content of the Dockerfile that we are using:
Then we need to install all the necessary dependencies in order to run it.
Common Issues
Make sure you have downloaded the latest version of the Tora client: https://github.com/ora-io/tora-electron-releases/releases/.
Make sure you have the official installation of docker.
Optimistic Privacy-Preserving AI onchain
opp/ai (Optimistic Privacy-Preserving AI on Blockchain) is a novel framework that combines the efficiency of opML (Optimistic Machine Learning) with the privacy guarantees of zkML (Zero-Knowledge Machine Learning). By strategically partitioning machine learning (ML) models, opp/ai balances computational efficiency and data privacy, enabling secure and efficient AI services onchain.
The following is a comprehensive overview of opp/ai, including its architecture, technical components, and use cases.
Privacy Preservation: Protects sensitive data and proprietary models by leveraging Zero-Knowledge Proofs (ZKPs) for critical components.
Balanced Efficiency: Reduces zkML overhead by limiting its use to essential parts of the model while running the rest in opML.
Flexibility: Allows customization of the model partitioning to tailor privacy and efficiency based on application requirements.
Security: Combines cryptographic security from zkML with the economic security model of opML.
opp/ai divides the ML model into submodels based on privacy requirements:
zkML Submodels:
Components that handle sensitive data or proprietary algorithms.
Executed using Zero-Knowledge Proofs to ensure data and model confidentiality.
opML Submodels:
Components where efficiency is prioritized over privacy.
Executed using the optimistic approach of opML for maximum efficiency.
Example Partitioning:
An ML model M is partitioned into:
M_zk (zkML Submodel): Handles sensitive computations.
M_op (opML Submodel): Handles non-sensitive computations.
Zero-Knowledge Proofs for zkML Submodels:
The prover generates ZKPs for M_zk without revealing sensitive data.
Proofs are submitted to the blockchain and verified by validators.
Optimistic Execution for opML Submodels:
M_op is executed efficiently off-chain.
Results are submitted to the blockchain by decentralized service providers and are assumed correct unless challenged.
Integration Mechanism:
Outputs from M_zk are used as inputs for M_op and vice versa as necessary.
Ensures seamless integration of zkML and opML components.
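The partitioning of M into M_zk and M_op can be sketched as follows. This is an illustrative toy, not ORA's implementation: the layer names and the per-layer sensitivity flags are assumptions.

```python
# Partition a model's layers into zkML and opML submodels based on a
# per-layer sensitivity flag (illustrative only).
layers = [
    {"name": "embedding",  "sensitive": True},   # proprietary weights -> zkML
    {"name": "attention",  "sensitive": False},  # commodity compute   -> opML
    {"name": "mlp",        "sensitive": False},
    {"name": "classifier", "sensitive": True},
]

m_zk = [l["name"] for l in layers if l["sensitive"]]      # M_zk submodel
m_op = [l["name"] for l in layers if not l["sensitive"]]  # M_op submodel

print(m_zk)  # ['embedding', 'classifier']
print(m_op)  # ['attention', 'mlp']
```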
opp/ai relies on the three major components of opML for opML submodels:
The FPVM is a specialized virtual machine designed for opML that can:
Execute and Verify Computation Steps: Capable of performing individual computation steps to resolve disputes.
State Management: Uses Merkle trees to represent and manage computation states efficiently.
Onchain Arbitration: Runs on smart contracts, enabling the on-chain verification of disputed steps.
The ML engine in opML is tailored for both native execution and fraud-proof scenarios:
Dual Compilation: ML models are compiled twice:
Native Execution: Optimized for speed, utilizing multi-threading and GPU acceleration.
Fraud-Proof VM Instructions: Compiled into a machine-independent code for use in FPVM during disputes.
Determinism and Consistency:
Fixed-Point Arithmetic: Ensures consistent results across different environments.
Software-Based Floating-Point Libraries: Guarantees cross-platform determinism.
An efficient protocol for resolving disputes:
Bisection Protocol:
Iteratively narrows down the computation steps to find the exact point of disagreement.
Reduces the amount of data and computation needed for onchain verification.
Onchain Arbitration:
Only the minimal necessary computation (usually a single step) is performed onchain.
Ensures disputes are resolved fairly and efficiently.
In addition to the opML components above, opp/ai incorporates the following zkML components:
The prover generates ZKPs, such as zk-SNARKs, to demonstrate the validity of the ML inference.
The verifier smart contract efficiently validates the zero-knowledge proofs provided by the prover.
To achieve model privacy, the proofs of all zkML submodels should have their model inputs set as public inputs, while the model parameters or weights should be kept private or hashed. The workflow is as follows:
Model Partitioning:
The ML model is partitioned into zkML and opML submodels.
All submodels running in opML are published as a public model available to all potential submitters and challengers.
Request Initiation:
A user (requester) submits a request for an ML inference task.
Computation and Submission:
The prover receives the model input and computes the proofs for the zkML submodels, submitting them onchain for verifiers while also requesting initiation of opML service tasks for the opML submodels.
The submitter performs the opML service task, taking the result from the zkML submodels as input to the opML model, and commits the results onchain.
Dispute Resolution:
The challengers validate the results and start a dispute process with the submitter if they find any of them inaccurate. The dispute process applies only to the specific opML submodel being challenged.
For opML submodels, disputes are resolved via the interactive dispute game using the FPVM.
For zkML submodels, the correctness is ensured by the cryptographic proofs.
Finalization:
Upon successful verification and resolution of disputes, the final result is confirmed on the blockchain.
In the context of preserving model privacy, the single prover trust assumption posits that only one trusted entity, the prover, has access to the model weights. All end users interact with this prover to generate zkML proofs without direct access to the model weights themselves.
This approach is designed to protect the confidentiality of the model while still enabling users to benefit from its predictive capabilities.
opp/ai acknowledges that attackers could attempt to reconstruct models through extensive queries due to the single prover trust assumption inherent to zkML.
In order to mitigate this, opp/ai can limit the number of allowed inferences or adjust the cost per inference to deter attacks.
To ensure user input privacy, zkML proof generation would have to occur on local devices to which the model weights are distributed. However, this exposes the model to reverse-engineering attacks.
Given this limitation of zkML, the opp/ai framework instead focuses on model parameter privacy rather than input privacy.
How does opp/ai differ from opML and zkML individually?
opp/ai integrates both opML and zkML to provide a balance between efficiency and privacy. It uses zkML for sensitive computations and opML for the rest, whereas opML focuses on efficiency without privacy, and zkML provides privacy but with higher computational costs.
What types of models are suitable for opp/ai?
Models that can be partitioned based on privacy requirements. This includes models where certain layers or components handle sensitive data or proprietary algorithms.
How do I decide which parts of my model should run in zkML vs. opML?
Analyze the model to identify components that require privacy (e.g., layers with proprietary weights or handling sensitive data). These should run in zkML. Components where privacy is not a concern can run in opML for efficiency.
What are the hardware requirements for opp/ai?
zkML Components: May require machines with substantial resources for proof generation, especially for larger models.
opML Components: Can run on standard PCs with CPU/GPU capabilities.
Is opp/ai practical for large-scale models?
While zkML components can be resource-intensive, opp/ai reduces the overhead by only applying zkML to critical parts of the model. This makes it more practical than using zkML for the entire model.
How does opp/ai handle input privacy?
Input privacy is challenging due to potential reverse-engineering attacks. opp/ai focuses on model privacy and acknowledges limitations in input privacy with current zkML technologies.
Can opp/ai be integrated with existing blockchain applications?
Yes, opp/ai is designed to be flexible and can be integrated into existing applications that require both efficient and privacy-preserving AI capabilities.
opML Documentation
Community and Support:
Discussion Forums: https://discord.gg/ora-io
Related Papers and Research:
opML: Optimistic Machine Learning on Blockchain: https://arxiv.org/pdf/2401.17555
opp/ai: Optimistic Privacy-Preserving AI on Blockchain: https://arxiv.org/pdf/2402.15006
For further questions or support, please reach out to the ORA team through the community channels.
Scalable machine learning (ML) inference onchain
opML (Optimistic Machine Learning) is an innovative framework that enables efficient and scalable machine learning (ML) inference directly on the blockchain. By leveraging an optimistic approach inspired by optimistic rollups (e.g., Optimism and Arbitrum), opML overcomes the computational limitations of blockchains, allowing for the execution of large ML models without sacrificing decentralization or security.
opML is currently available via ORA’s AI Oracle.
The following is a comprehensive overview of opML, including its architecture, technical components, and frequently asked questions.
High Efficiency: Computations are performed off-chain in optimized environments and only minimal data is processed on-chain during disputes.
Cost-Effectiveness: Reduces computational costs by avoiding the expensive proof generation required in Zero-Knowledge Machine Learning (zkML).
Decentralization: Maintains the decentralized ethos of blockchain by enabling onchain verification without relying on centralized servers.
Scalability: Capable of handling extensive computations that are impractical with traditional onchain methods.
Accessibility: Opens up advanced AI models and capabilities to decentralized applications.
opML combines Optimistic Execution with a Fraud-Proof Mechanism.
opML operates on the principle of optimistic execution, which assumes that the ML computations submitted are correct by default. This approach allows computations to be performed offchain in optimized environments, significantly reducing the computational burden on the blockchain.
Assumption of Correctness: Results are accepted unless proven otherwise.
Efficiency: By not requiring immediate verification, the system avoids unnecessary computational overhead.
To ensure security and correctness, opML incorporates the following fraud-proof mechanism:
Submission of Results: The service provider (submitter) performs the ML computation offchain and submits the result to the blockchain.
Verification Period: Validators (or challengers) have a predefined period (challenge period) to verify the correctness of the submitted result.
Dispute Resolution:
If a validator detects an incorrect result, they initiate an Interactive Dispute Game.
The dispute game efficiently pinpoints the exact computation step where the error occurred.
On-Chain Verification: Only the disputed computation step is verified on-chain using the Fraud Proof Virtual Machine (FPVM), minimizing resource usage.
Finalization: If no disputes are raised during the challenge period, or after disputes are resolved, the result is finalized on the blockchain.
Deterministic ML: Randomness and variability in floating-point computations result in inconsistent ML outputs. To address this, opML fixes the random seed so that outputs can be verified across multiple instances of the model.
Separated Execution from Proving: opML compiles the source code twice: once for native execution and once for the fraud-proof VM instructions. This allows computations to be performed in optimized native environments for fast execution, while proving is based on machine-independent code.
Lazy Loading Design: Loads only necessary data into the FPVM during the dispute phase to overcome memory limitations of the VM.
Attention Challenge: Validators are randomly selected to verify computations. Failing to respond when selected results in penalties, motivating active participation.
Economic Security: Validators and submitters stake tokens to participate. Dishonest behavior results in penalties (loss of stake). By staking tokens, both submitters and validators have economic incentives to behave honestly.
The FPVM is a specialized virtual machine designed for opML that can:
Execute and Verify Computation Steps: Capable of performing individual computation steps to resolve disputes.
State Management: Uses Merkle trees to represent and manage computation states efficiently.
Onchain Arbitration: Runs on smart contracts, enabling the on-chain verification of disputed steps.
The ML engine in opML is tailored for both native execution and fraud-proof scenarios:
Dual Compilation: ML models are compiled twice:
Native Execution: Optimized for speed, utilizing multi-threading and GPU acceleration.
Fraud-Proof VM Instructions: Compiled into a machine-independent code for use in FPVM during disputes.
Determinism and Consistency:
Fixed-Point Arithmetic: Ensures consistent results across different environments.
Software-Based Floating-Point Libraries: Guarantees cross-platform determinism.
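The determinism argument can be made concrete with a toy fixed-point example. The Q16.16 format and the values below are illustrative choices, not opML's actual representation:

```python
# Integer arithmetic is bit-exact on every platform, while floating-point
# results can vary with evaluation order and hardware.
SCALE = 1 << 16  # Q16.16 fixed-point: 16 integer bits, 16 fractional bits

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def fixed_mul(a: int, b: int) -> int:
    # The product of two Q16.16 numbers is Q32.32; shift back to Q16.16.
    return (a * b) >> 16

w, x = to_fixed(0.5), to_fixed(3.25)
y = fixed_mul(w, x)
print(y / SCALE)  # 1.625 -- identical on every platform and inside the VM
```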
An efficient protocol for resolving disputes:
Bisection Protocol:
Iteratively narrows down the computation steps to find the exact point of disagreement.
Reduces the amount of data and computation needed for onchain verification.
Onchain Arbitration:
Only the minimal necessary computation (usually a single step) is performed onchain.
Ensures disputes are resolved fairly and efficiently.
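The bisection idea can be sketched as a binary search over two computation traces. This is a schematic of the protocol's core step, not opML's implementation; in practice the parties exchange Merkle commitments to VM states rather than raw states:

```python
def find_disputed_step(a, b):
    """Return the index of the first step where traces a and b differ.

    Assumes a[0] == b[0] (both parties agree on the input) and
    a[-1] != b[-1] (the final result is disputed).
    """
    lo, hi = 0, len(a) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if a[mid] == b[mid]:
            lo = mid   # states still agree here; divergence is later
        else:
            hi = mid   # already diverged; divergence is at or before mid
    return hi          # the single step to arbitrate onchain

# Two 8-step traces that diverge at step 5:
proposer   = ["s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7"]
challenger = ["s0", "s1", "s2", "s3", "s4", "x5", "x6", "x7"]
print(find_disputed_step(proposer, challenger))  # 5
```

Only step 5 would then be re-executed onchain by the FPVM, which is why disputes stay cheap even for very long computations.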
Request Initiation: A user (requester) submits a request for an ML inference task.
Computation and Submission:
The submitter performs the computation off-chain using the optimized ML engine.
Results are submitted to the blockchain.
Challenge Period:
Validators verify the correctness of the result.
If no disputes are raised within the predefined period, the result is accepted.
Dispute Resolution (if necessary):
A validator initiates the dispute game if an incorrect result is detected.
The exact computation step in dispute is identified.
On-chain verification is performed using the FPVM.
Finalization: The result is finalized after successful verification or dispute resolution.
Definition (AnyTrust): The system is secure as long as at least one validator is honest.
Implications:
Even if multiple validators are malicious, a single honest validator can ensure correctness.
Encourages decentralization and wide participation to strengthen security.
Verifier Dilemma:
Validators may choose not to verify computations to save resources, potentially allowing incorrect results.
Our Solution:
Implementing the Attention Challenge mechanism detailed above encourages validators to participate.
Economic incentives ensure that the cost of cheating outweighs potential gains.
What makes opML different from zkML?
opML focuses on efficiency and scalability by using an optimistic approach without zero-knowledge proofs. zkML provides strong privacy guarantees but is resource-intensive and less practical for large models. The use of opp/ai enables the use of opML with privacy guarantees.
How does opML ensure security without zero-knowledge proofs?
opML relies on a fraud-proof mechanism and the AnyTrust assumption. Validators can challenge incorrect results, and economic incentives discourage dishonest behavior.
What happens if no validators challenge an incorrect result?
The economic incentives and attention challenge mechanism are designed to ensure validators participate. However, if all validators are dishonest or fail to challenge, the system could accept incorrect results, highlighting the importance of validator participation.
Can opML handle private data or proprietary models?
opML assumes that data and models are not sensitive. For applications requiring privacy, consider using opp/ai, which integrates opML with zkML components for privacy preservation.
What are the hardware requirements for running opML?
opML is designed to run on standard PCs with CPU/GPU capabilities. No specialized hardware is required.
How does the dispute resolution process impact performance?
Disputes are rare and only involve verifying minimal computation steps on-chain. The impact on performance is minimal compared to the overall efficiency gains. The dispute resolution process can also be optimized by zkfp.
Is opML open-source?
Yes, opML is open source. You can access the repository at https://github.com/ora-io/opml. New versions will be released shortly.
opML GitHub Repository: https://github.com/ora-io/opml
opML Paper: https://arxiv.org/pdf/2401.17555
Issue Tracker: https://github.com/ora-io/opml/issues
For any further questions or support, feel free to reach out to the ORA team through the community channels.
RMS (Resilient Model Services) is an AI service designed to provide computation for all scenarios, ensuring resilient AI computation through the power of opML.
The first stage of RMS includes ORA's AI API service that integrates seamlessly with existing AI frameworks.
The API service allows developers to call the most popular AI models for inference tasks like chat completions, image generation, and more, with all interactions being decentralized and verifiable.
AI Agents within RMS can operate with high autonomy and verifiability, ensuring their operations are secure and transparent onchain.
Learn more about ORA's vision for agents in opAgent's manifesto.
DeFAI, or Decentralized Finance + Artificial Intelligence, aims to abstract the complexities of DeFi with AI.
RMS will support DeFAI by providing AI services like automated trading strategies, risk management, and personal financial agents, all verifiable and decentralized.
Web3 games on RMS can leverage AI for dynamic gameplay, NPC interactions, and real-time decision-making, all while ensuring that game logic and outcomes are verifiable on the blockchain.
✅ Stage 1 - API Service
The initial release of RMS includes the API service for AI computations, allowing developers to integrate with models like those from DeepSeek, Meta-Llama, Google, MistralAI, Qwen, and others. This stage focuses on proving the concept of decentralized and verifiable AI computation.
Expansion of Supported Models:
Continual addition of new models and functionalities, enhancing the variety of services available through RMS.
Integration with Onchain Protocols:
Further integration with onchain protocols to explore and implement more complex use cases such as DeFAI.
Enhanced Framework Integration:
Developing more tools and adapting to existing libraries (e.g., Unity for Web3 game developers) so that the RMS service can be used in compatible ways.
At the core of RMS is opML, a framework invented by ORA that brings verifiability and decentralization to AI computation. Using opML provides benefits of verifiability, resilience, flexibility, and transparency.
RMS is part of ORA’s commitment to providing decentralized, verifiable AI solutions.
More info will be available soon.
ERC-7641: Intrinsic RevShare Token
ERC-7641 is an extension of the standard ERC-20 token, designed to integrate a seamless revenue-sharing mechanism directly into the token's functionality. It allows token holders to claim and redeem a proportional share of a communal revenue pool, enhancing the token's utility and promoting active participation in the ecosystem.
ERC-7641 is a core component of the Initial Model Offering (IMO).
Revenue Sharing for Token Holders: Provides a standardized method for distributing revenue among token holders, rewarding them for their participation and investment.
Funding for Projects: Enables any project, including open-source initiatives, to tokenize their revenue streams, fostering sustainable development and community engagement.
Flexibility and Fairness: Maintains the essential attributes of ERC-20 tokens while introducing mechanisms for fair revenue distribution and deflationary economics through token burning.
Claimable Revenue: Token holders can periodically claim a portion of the revenue pool based on their token holdings at specific snapshots.
Snapshot Functionality: Snapshots capture token balances and claimable revenue at certain points in time, ensuring accurate and fair distribution.
Proportional Distribution: Revenue between snapshots is distributed proportionally to token holders, with a specified sharing ratio determining the portion allocated for claims.
Token Burning: Holders can burn their tokens to redeem a proportional share of the revenue pool, contributing to a deflationary model.
Redeemable Value: The value redeemed upon burning is calculated based on the holder's share of the total token supply and the total redeemable revenue in the pool.
Deflationary Economics: Reduces the total token supply over time, encouraging long-term holding.
The ERC-7641 standard introduces several interfaces to facilitate its functionality:
IERC7641
Defines the primary functions for revenue sharing and token burning:
claimableRevenue(address account, uint256 snapshotId)
: Calculates the amount of ETH claimable by a token holder at a specific snapshot.
claim(uint256 snapshotId)
: Allows a token holder to claim their share of ETH based on a snapshot.
snapshot()
: Creates a new snapshot of token balances and claimable revenue, returning a unique snapshot ID.
redeemableOnBurn(uint256 amount)
: Calculates the amount of ETH redeemable upon burning a specified amount of tokens.
burn(uint256 amount)
: Burns a specified amount of tokens and redeems the corresponding share of the revenue pool.
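The proportional math behind claimableRevenue and redeemableOnBurn can be sketched as follows. This is a hedged illustration of the distribution logic described above, not the reference implementation; the function signatures and the sharing-ratio parameter are simplified assumptions:

```python
def claimable_revenue(balance: int, total_supply: int,
                      snapshot_pool: float, sharing_ratio: float) -> float:
    """ETH claimable by a holder at a snapshot, proportional to balance."""
    return snapshot_pool * sharing_ratio * balance / total_supply

def redeemable_on_burn(amount: int, total_supply: int,
                       redeemable_pool: float) -> float:
    """ETH redeemed when burning `amount` tokens."""
    return redeemable_pool * amount / total_supply

# A holder with 1% of supply, a 10 ETH snapshot pool, 50% sharing ratio:
print(claimable_revenue(10_000, 1_000_000, 10, 0.5))  # 0.05 ETH
# The same holder burning all tokens against a 5 ETH redeemable pool:
print(redeemable_on_burn(10_000, 1_000_000, 5))       # 0.05 ETH
```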
IERC7641AltRevToken
An optional extension to support revenue sharing with alternative ERC-20 tokens beyond ETH:
claimableERC20(address account, uint256 snapshotId, address token)
: Calculates the amount of a specific ERC-20 token claimable by a holder at a snapshot.
redeemableERC20OnBurn(uint256 amount, address token)
: Calculates the amount of a specific ERC-20 token redeemable upon burning tokens.
ERC-7641 enables open-source projects to tokenize their revenue streams, providing sustainable funding and incentivizing contributions. By distributing Intrinsic RevShare tokens to contributors, projects can reflect involvement and share revenue proportionally.
Combining ERC-7641 with the concept of an Initial Model Offering (IMO), open-source developers can conduct fundraising for new AI model development and create ongoing communities for their projects. Token holders benefit from revenue generated by the AI models, fostering a collaborative and sustainable ecosystem.
Reference Implementation: GitHub Repository
ERC-7641 Proposal: Ethereum Improvement Proposal
Related Concepts:
ERC-7007: Verifiable AI-Generated Content Token
ORA API is an integral part of ORA's Resilient Model Services (RMS).
The ORA API serves as the gateway for developers to interact with RMS, providing a decentralized, verifiable, and resilient platform for AI computations.
Advantages of the ORA API include the broadest support for AI models in crypto, competitive pricing with predictable onchain costs, a verifiable AI inference service, and OpenAI compatibility.
Before using the ORA API, developers need to:
Obtain API Key: Register for an ORA API key through the developer portal to authenticate requests.
Set Up Environment: Ensure you have Python installed if you're using the SDK, or a tool like cURL for direct API calls.
POST
https://api.ora.io/v1/chat/completions
Generate text responses from ORA API AI models.
Headers
Content-Type
application/json
Authorization
Bearer <ORA_API_KEY>
Body
model
string
Name of AI model
messages
array
Content for AI model to process
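A chat completions request matching the spec above can be built with the standard library alone. The endpoint, headers, and body fields come from the table; the API key and model name are placeholders (check the supported models list for valid names):

```python
import json
import urllib.request

ORA_API_KEY = "your-api-key"  # placeholder; use your real key

payload = {
    "model": "meta-llama/Llama-3.3-70B-Instruct",  # example model name (assumption)
    "messages": [{"role": "user", "content": "Hello, ORA!"}],
}

req = urllib.request.Request(
    "https://api.ora.io/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {ORA_API_KEY}",
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
print(req.full_url, req.get_method())
```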
POST
https://api.ora.io/v1/images/generations
Generate images based on text prompts.
Headers
Content-Type
application/json
Authorization
Bearer <ORA_API_KEY>
Body
model
string
Name of AI model
prompt
string
Prompt for AI model to process
steps
integer
Number of diffusion steps AI model will take during generation
n
integer
Number of images to generate based on the prompt
The generated images are stored on IPFS.
ORA API is designed to be compatible with the OpenAI SDK, facilitating a smooth transition for developers already familiar with it:
ORA API supports a variety of open-source models, all verifiable through opML:
Error Handling: Always implement error handling to manage API response errors or timeouts.
Rate Limiting: Be aware of and respect rate limits to avoid service disruptions.
Security: Never expose your API key in client-side code. Use server-side calls or secure environment variables.
The Initial Model Offering platform, along with research.ora.io, is an initiative to empower the development and funding of innovative AI models, achieving blockchain intelligence through open and collaborative AI research.
The IMO process of each proposal is divided into two key stages:
Deposit Stage: Participants contribute $ORA tokens to their chosen IMO proposal.
Claiming Stage: Contributors to successful proposals can claim their allocated IMO tokens and any remaining $ORA tokens, if applicable.
ORA ensures that every IMO launch is decentralized and transparent, requiring $ORA as the primary deposit for allocation.
Review the IMO Proposal: Examine the proposal's technology and tokenomics.
Deposit Funds: Contribute $ORA or $ETH (converted to $ORA by the protocol) to secure an allocation and support the IMO proposal.
Wait for the Raising Period: Allow the raising period to conclude, during which funds are collected.
Claim Tokens or Refunds:
If the raising target is met or exceeded:
Claim IMO tokens and any unutilized $ORA. For oversubscription: Allocation of IMO Tokens Per User = (Total Raising Target $ORA / Total Actual Raised $ORA) × Deposited $ORA from User.
If the raising target is not met:
Claim your original $ORA deposit in full.
During Deposit Stage, participants deposit $ORA tokens into the IMO proposal of their choice.
Once deposited, $ORA is locked until the raising period finishes, at which point the proposal is classified as either successful or failed based on the expected and actual raised amounts.
The following Claiming Stage will allow participants to retrieve their allocations or refunds based on the proposal's outcome.
During the Claiming Stage, participants of successful proposals can claim their $ORA tokens and $IMO tokens based on different conditions.
If the total $ORA raised exactly meets the target, participants receive solely IMO tokens based on the pricing specified in the proposal.
If the proposal is oversubscribed (raised amount exceeds the target), participants receive IMO tokens proportionally to their deposited $ORA.
For example: Target = 1M $ORA; Actual Raised = 2M $ORA; Participants receive IMO tokens for 50% (= 1M / 2M) of their deposit, and the remaining 50% $ORA is refunded.
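The allocation formula and the example above can be reproduced with a quick calculation (an illustrative sketch, not the protocol's contract logic):

```python
def imo_allocation(target: float, raised: float, deposit: float):
    """Return (ORA converted to IMO tokens, ORA refunded) for one user."""
    if raised < target:
        return 0.0, deposit                # failed proposal: full refund
    converted = (target / raised) * deposit
    return converted, deposit - converted  # oversubscription: partial refund

# Oversubscribed example from the docs: target 1M, raised 2M, deposit 100 ORA.
print(imo_allocation(1_000_000, 2_000_000, 100))  # (50.0, 50.0)
```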
No IMO tokens are issued for failed proposals. Participants are refunded the full amount of their deposited $ORA.
Q: What happens if a proposal is oversubscribed?
If the total $ORA raised exceeds the target, participants receive IMO tokens proportionally to their deposit. Any unconverted $ORA is refunded during the Claiming Stage.
Q: Can I deposit with $ETH during the Deposit Stage?
Yes. When depositing with $ETH, your $ETH will be auto-converted into $ORA by the protocol to secure your allocation. Using tools like MEV Blocker can help with MEV protection. In refund or oversubscription conditions, you will be able to claim the $ORA that was converted from $ETH.
Q: Can I withdraw my $ORA during the Deposit Stage?
No, once deposited, $ORA remains locked until the proposal's outcome is determined.
Q: If I deposit with $ETH, what do I claim in claiming stage?
In refund or oversubscription conditions, you will be able to claim the $ORA that was auto-converted from your $ETH by the protocol during the Deposit Stage.
Q: What happens if a proposal fails to meet its raising target?
Participants will receive a full refund of their deposited $ORA. No IMO tokens will be issued for failed proposals.
Q: Are there any transaction fees involved?
Yes, every onchain interaction (e.g., depositing, claiming) requires Ethereum network transaction fees. Participants should ensure they have sufficient ETH for these fees.
Q: Can I participate in multiple IMO proposals?
Yes, participants can deposit $ORA into multiple proposals. Each deposit is managed independently based on the outcome of the respective proposal.
Q: How are the prices of IMO tokens determined?
The price of IMO tokens is specified in the proposal before the Deposit Stage begins. This price is fixed and forms the basis for allocation.
Q: Is there a limit to how much $ORA I can deposit?
Deposit limits, if applicable, are defined in the specific IMO proposal. If not specified, it means there's no limit on how much $ORA one can deposit. Participants should review the proposal details before depositing.
Q: Can rules vary between IMOs?
Yes, rules may vary depending on the specific IMO proposal. Participants should carefully review the details of each proposal to understand its terms.
This document is intended for informational purposes only. ORA reserves the right to modify or update the rules, processes, calculations, and any other process associated with the IMO platform at any time to ensure fairness and functionality. By participating in the IMO process, you acknowledge that the final interpretation and implementation of all rules rest solely with ORA.
Supported AI Models and Deployment Addresses of ORA's AI Oracle.
| Model ID | Deployed Networks | Fee (Mainnet) | Fee (Testnet) | Notes |
|---|---|---|---|---|
| 11 | Ethereum & Ethereum Sepolia, Optimism & Optimism Sepolia, Arbitrum & Arbitrum Sepolia, Manta & Manta Sepolia, Linea, Base, Mantle, Polygon PoS | 0.0003 ETH / 3 MATIC / 3 MNT | 0.01 ETH | |
| 13 | Ethereum & Ethereum Sepolia | 0.0003 ETH | 0.01 ETH | |
| 14 | Arbitrum & Arbitrum Sepolia, Ethereum Sepolia | 0.0003 ETH | 0.01 ETH | Inputs exceeding 7,000 characters will not receive a callback. |
| 15 | Arbitrum & Arbitrum Sepolia, Ethereum Sepolia | 0.0003 ETH | 0.01 ETH | |
| 50 | Ethereum & Ethereum Sepolia, Optimism & Optimism Sepolia, Arbitrum & Arbitrum Sepolia, Manta & Manta Sepolia, Linea, Base, Mantle, Polygon PoS | 0.0003 ETH / 3 MATIC / 3 MNT | 0.01 ETH | |
| 503 | Ethereum & Ethereum Sepolia, Optimism & Optimism Sepolia, Arbitrum & Arbitrum Sepolia, Manta & Manta Sepolia, Linea, Base, Mantle, Polygon PoS | 0.0003 ETH / 3 MATIC / 3 MNT | 0.01 ETH | |
Prompt and SimplePrompt are both example smart contracts that interact with the AI Oracle.
For simpler application scenarios (e.g., prompt-engineering-based AI such as GPTs), you can use Prompt or SimplePrompt directly.
SimplePrompt saves gas by only emitting an event, without storing historical data.
Ethereum Mainnet
Deprecated contracts: AIOracle, Prompt.
Optimism Mainnet
Optimism Sepolia
Arbitrum Mainnet
Arbitrum Sepolia Testnet
Manta Mainnet
Manta Sepolia Testnet
Linea Mainnet
Verifiable AI-Generated Content Token
ERC-7007 is an extension of the ERC-721 non-fungible token (NFT) standard, specifically designed to accommodate verifiable AI-generated content on the Ethereum blockchain.
It can be paired with the Initial Model Offering (IMO) in order to create inference assets that benefit model token holders.
ERC-7007 is a token standard that enables the creation and management of NFTs representing AI-generated content. It introduces a set of interfaces and events that allow for:
Verification of AI-Generated Content: Ensures that the content associated with an NFT is generated from a specific AI model using a given input (prompt).
Integration with zkML and opML: Supports Zero-Knowledge Machine Learning (zkML) and Optimistic Machine Learning (opML) to verify the correctness of outputs.
Unique Token Identification: Assigns token IDs based on the prompt, ensuring that each NFT is uniquely associated with its input and output.
As AI-generated content becomes more prevalent, there is a need for a standardized way to represent and verify this content on the blockchain. ERC-7007 addresses several challenges:
Verification of Authenticity: Allows users to verify that the AI-generated content is produced by a specific model with a specific input.
Support for AI Model Authors and Content Creators: Provides a way for creators to monetize their AI models and content without risking open-source devaluation.
Enabling Revenue-Sharing Mechanisms: Facilitates fair compensation for AI model authors and content creators through secure and verifiable NFTs.
AI Model: A pre-trained machine learning model that generates output based on a given input (prompt).
zkML / opML:
zkML (Zero-Knowledge Machine Learning): Uses zero-knowledge proofs to verify that the content was generated correctly without revealing the model's details.
opML (Optimistic Machine Learning): Uses fraud proofs, where correctness is ensured through a challenge period and onchain dispute resolution.
AIGC-NFT smart contract: An onchain contract compliant with ERC-7007, with full ERC-721 functionality.
Verifier smart contract: Implements a verify function. Given an inference task and its corresponding ZK proof or opML finalization, it returns the verification result as a boolean.
Model Publication: The AI model and its zero-knowledge proof verifier are published on Ethereum.
Inference Task: A user submits an input (prompt) and initiates an inference task.
Proof Generation: Nodes maintaining the model generate the output and a zero-knowledge proof of the inference.
Verification: The proof is verified on-chain using the verifier contract.
Minting NFT: The user owns the NFT representing the AI-generated content, secured by the proof.
Model Publication: The AI model is published on Ethereum using the ORA network.
Inference Task: A user submits an input (prompt) and initiates an inference task.
Result Submission: Nodes perform the inference and submit the output.
Challenge Period: Other nodes can challenge the result within a predefined period.
Finalization: After the challenge period, if unchallenged or successfully defended, the user can verify ownership and update the AIGC data as needed.
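The opML lifecycle above can be sketched with a minimal model. The types and fields here are hypothetical, and block numbers stand in for time; the actual protocol lives in ORA's contracts and nodes.

```python
from dataclasses import dataclass

@dataclass
class OpmlResult:
    """Minimal model of an opML inference result awaiting finalization."""
    output: str
    submitted_at: int       # block height at which the result was submitted
    challenge_period: int   # number of blocks in the challenge window
    challenged: bool = False

    def can_challenge(self, block: int) -> bool:
        # Other nodes may dispute the result only inside the challenge window.
        return block < self.submitted_at + self.challenge_period

    def finalized(self, block: int) -> bool:
        # An unchallenged result finalizes once the window has passed.
        return not self.challenged and block >= self.submitted_at + self.challenge_period
```

Once `finalized` returns true, the user can rely on the result and mint or update the AIGC data.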
ERC-7007 introduces a standardized JSON schema for NFT metadata, including:
name: Identifies the asset.
description: Describes the asset.
image: URI pointing to an image representing the asset.
prompt: The input used to generate the AI content.
aigc_type: Type of AI-generated content (image, video, audio, etc.).
aigc_data: URI pointing to the AI-generated content.
proof_type: Indicates whether it's a validity proof (zkML) or fraud proof (opML).
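An illustrative metadata instance covering the fields above might look like the following. All values are hypothetical placeholders, not taken from a real token; only the field names come from the schema.

```json
{
  "name": "AIGC #1",
  "description": "Artwork generated by an onchain-verified AI model",
  "image": "ipfs://<cid>/preview.png",
  "prompt": "a city skyline at dusk, watercolor style",
  "aigc_type": "image",
  "aigc_data": "ipfs://<cid>/output.png",
  "proof_type": "fraud proof"
}
```

Here `proof_type` is "fraud proof" because the content was secured via opML; a zkML-secured token would use "validity proof".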
An artist uses an AI model to generate artwork based on user-specified prompts:
Model Deployment: The artist deploys the AI model on Ethereum using the ORA Network.
Content Generation: Users submit prompts during the minting process to generate unique artworks.
Verification: Each piece is verified to confirm it was generated by the specific AI model using the provided prompt.
NFT Minting: The verified art is minted as an ERC-7007 NFT, ensuring authenticity and uniqueness. Proceeds from the mint flow back to the artist.
A musician tokenizes an AI model using an Initial Model Offering (IMO) that generates music tracks:
Model Publication: The AI music model is tokenized using the IMO process.
User Interaction: Fans submit prompts or themes to generate personalized music tracks.
Verification and Minting: Tracks are verified and minted as NFTs using ERC-7007 and ORA’s AI Oracle.
Ownership and Royalties: The musician can implement royalty mechanisms for each NFT sale. Additionally, the use of the IMO allows token holders to receive a corresponding portion of the mint fees.
ERC-7007 Proposal: Ethereum Improvement Proposal
Related Concepts:
opML and opp/AI
The FPVM is a specialized virtual machine designed for opML that can:
Execute and Verify Computation: Capable of performing individual computation steps to resolve disputes.
State Management: Uses Merkle trees to represent and manage computation states efficiently.
Onchain Arbitration: Runs on smart contracts, enabling the on-chain verification of disputed steps.
The AI Settlement Oracle is the first AI-powered truth machine, leveraging verifiable AI to resolve and settle factual questions onchain. It offers a trustless, autonomous system for information resolution, eliminating human error and manipulation.
Accuracy & Fairness: Immune to economic manipulation and herd behaviour typical of traditional systems.
Trustless Settlement: Ensures unbiased outcomes for any factual query.
The AI Settlement Oracle contains two parts of interaction:
Parse Context: The initial interaction processes a question by analyzing web sources and summarizing them into a coherent "truth context." This provides a reliable foundation for further reasoning.
Onchain Settlement: In the second interaction, the truth context and helper prompts are integrated into an input for the AI Oracle. The Oracle uses this input to generate a verified outcome, enabling decentralized and transparent truth settlement onchain.
Quick Start guides to the CLI
Please confirm that Docker is installed on your computer and that you have a stable network connection. Installation, operation, and upgrades of the Tora validator client are all based on Docker.
To run a Tora validator client, your computer must have:
Any Operating System with Docker installed.
A CPU with at least 1 core and more than 8 GB of RAM (12 GB recommended). This configuration is sufficient to run an OpenLM model server.
💡 Set up a fresh wallet to run the node. Do not use your everyday wallet.
💡 Please ensure that the wallet used for confirmation has sufficient balance to pay for transaction gas.
💡 Make sure to set appropriate RAM limit in the docker engine configuration.
Download the latest version of Tora validator client at releases page.
Enter the project directory tora-docker-compose and create the configuration file .env. For the parameters that need to be configured in .env, refer to the next subsection.
You need to modify the following four environment variables.
MAINNET_WSS
MAINNET_HTTP
SEPOLIA_WSS
SEPOLIA_HTTP
The Tora validator client does not currently provide a default public RPC. To create your own API key, register an account with an RPC provider such as Alchemy.
Currently, running multiple networks in one node is not supported.
The Tora validator client currently supports Ethereum Mainnet and Sepolia.
You need to modify the CONFIRM_CHAINS environment variable to select the chain you want to listen to. The variable takes a string, but you can listen to multiple chains at the same time by supplying a JSON-formatted array string.
The following examples are feasible:
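For illustration, a single-chain and a multi-chain setting might look like this. The exact chain identifier strings are assumptions; check the sample .env shipped with the release for the accepted values.

```
# Listen to a single chain
CONFIRM_CHAINS="sepolia"

# Listen to multiple chains via a JSON-formatted array string
CONFIRM_CHAINS='["mainnet","sepolia"]'
```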
You need to modify the PRIV_KEY environment variable to set the wallet used for confirmation.
Please ensure that the wallet has enough balance to pay transaction gas.
For example:
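A placeholder illustration of the variable's shape (never commit a real private key to the file or to version control):

```
PRIV_KEY=<0x-prefixed private key of the confirmation wallet>
```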
The TORA validator currently supports OpenLM model.
You can modify the CONFIRM_MODELS environment variable to select the models you want to validate. The variable takes a JSON-encoded array of numbers.
The following example is feasible:
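For illustration, selecting the OpenLM model (model ID 13, per the supported-models list):

```
CONFIRM_MODELS=[13]
```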
When CONFIRM_USE_CROSSCHECK is set to true, the node will periodically check whether any events were missed. If set to false, it will not check for misses.
The following example is feasible:
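For illustration, enabling crosscheck:

```
CONFIRM_USE_CROSSCHECK=true
```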
This environment variable is the time interval between each crosscheck.
This environment variable sets how many blocks each crosscheck scans. It is recommended to set a relatively large block count, one that takes at least 1 hour to generate.
💡 There won’t be any overlapping or missing blocks as long as this value is greater than 0, because crosschecks always start from the last checkpoint height
A confirm task will be processed within a certain period of time; after the timeout, the node will skip the task.
This environment variable determines how often crosschecks are performed. Note that it can safely be set to a small number, since the crosschecker also waits until BATCH_BLOCKS_COUNT blocks have accumulated for each check.
This environment variable affects the log level. The default is production; changing it to another value generates more detailed logs.
This environment variable determines the lifespan of Redis connections, which is set to one day by default.
This will start 4 docker containers:
ora-tora
ora-redis
ora-openlm
diun
If Docker Desktop is installed, you can view the logs under Docker Desktop → Containers → tora-olm.
Otherwise, you can view them from the command line.
Initialization
After executing the command docker compose up, the Tora validator starts the initialization process.
When you see [confirm] [+] RPC Server running on port 5001, the Tora validator has completed initialization and everything is ready.
Initiate an OLM model request
To test that the node is behaving correctly, request inference from the OpenLM model through https://www.ora.io/app/opml/models.
inference
Shortly after the request is initiated, the Tora validator will log receive event in tx .... This indicates that the validator has received the request and started to perform inference.
confirm done
If “confirmed at txhash: 0x....” appears in the Docker logs, it means the node ran inference successfully and the result was confirmed.
By default, the node uses the GPU and a caching mechanism, completing the confirmation process within a few seconds. Running on CPU without the cache typically takes about 15 minutes.
💡 Inference results are cached, so the same prompt will not be inferred again; once a prompt has been cached, the process completes instantly.
Currently, multiple models run on ORA. This error indicates that the modelId of the detected OAO event is not in your support_models.
Since the Tora validator currently only supports OpenLM (model ID: 13), this error message can be safely ignored for now.
This error indicates that off-chain model inference is functioning but the on-chain contract call fails. This can have various causes, and debugging needs to be done in conjunction with the specific error information.
If you encounter this error, please record the failed txHash and contact community developers.
This error indicates that the on-chain model has been upgraded, and the old version of the model is deprecated.
Please use the docker image rm and docker container rm commands to delete the existing Tora validator node, download the latest version of the Tora validator, and start the node again.
ORA leverages opML for Onchain AI Oracle because it’s the most feasible solution on the market for running any-size AI model onchain. The comparison between opML and zkML can be viewed from the following perspectives:
Proof system: opML uses fraud proofs, while zkML uses zk proofs.
Security: opML uses crypto-economic based security, while zkML uses cryptography based security.
Finality: We can define the finalized point of zkML and opML as follows:
zkML: Zero-knowledge proof of ML inference is generated (and verified).
opML: The challenge period of the ML inference has passed. With additional mechanisms, finality can be achieved well before the end of the challenge period.
Opp/AI combines both opML and zkML approaches to achieve scalability and privacy. It preserves privacy while being more efficient than zkML.
Compared to pure zkML, opp/ai has much better performance with the same privacy feature.
Initial Model Offering: Tokenising AI Models Onchain
Open-source AI models are vital for fostering innovation and collaboration. However, sustaining these models financially remains a significant challenge. As a result, the AI industry is currently dominated by closed-source, for-profit companies.
To address this, ORA introduces the Initial Model Offering (IMO), a mechanism that tokenizes AI models onchain. IMOs enable open-source AI model developers to secure funding and build in public, fostering a more collaborative and innovative AI ecosystem.
An Initial Model Offering (IMO) involves the issuance of an ERC-7641 Intrinsic RevShare Token for an AI model to represent its long-term value. By holding IMO tokens, anyone can become a part of the AI model's ecosystem and contribute to its development and sustainability while capturing a percentage of fees incurred by users during inference.
For AI Model Developers: Provides sustainable funding for open-source development.
For Ecosystems: Aligns values and incentives for distribution and ongoing contribution.
For Token Holders: Enables capturing the value of AI models onchain through revenue sharing.
Tokenization: An AI model is tokenized into ERC-7641 tokens.
Value Generation: The AI model generates fees through onchain usage and AI-generated content.
Value Distribution: The token mechanism facilitates the distribution of fees within the ecosystem transparently and proportionally.
Governance: Token holders can assist with model ecosystem development through voting on funding and grants.
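The value-distribution step can be sketched as a pro-rata split of usage fees across token holders. This is a hypothetical illustration of the revenue-sharing idea, not the actual ERC-7641 implementation; names and units are made up.

```python
def distribute(fee_ora: float, balances: dict) -> dict:
    """Split a usage fee pro rata across token holders by token balance."""
    supply = sum(balances.values())
    # Each holder's share is proportional to the fraction of supply they hold.
    return {holder: fee_ora * bal / supply for holder, bal in balances.items()}
```

For example, a 100-unit fee with holders owning 60% and 40% of the supply distributes 60 and 40 units respectively.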
An IMO combines two core components:
Verifiable Onchain AI Models
ERC-7641 Intrinsic RevShare Token
To ensure the token mechanics are a closed loop, the model uses verifiable onchain inferences. This is achieved through:
opML (Optimistic Machine Learning): Leverages fraud proofs to ensure model correctness and overcome the computational limitations of blockchains.
AI Oracle: The practical implementation of opML, enabling AI models to operate onchain through the ORA Network.
By adopting opML and our AI Oracle, IMOs ensure practicality and performance in running AI models onchain while simultaneously providing a mechanism to capture fees when that model’s outputs are ingested by other smart contracts.
The ERC-7641 Intrinsic RevShare Token standardizes interaction and revenue sharing logic with the AI model. Participants may benefit from mechanisms such as:
Model Usage Fees: Each onchain use of the AI model involves a fee managed through the token mechanism. This fee is distributed proportionally to token holders.
AI-Generated Content Revenue: Outputs generated by the AI model (e.g. NFTs from an image generator model) may carry royalty and mint fees. By using additional standards such as ERC-7007, these fees can also be captured and distributed to token holders.
ETHDenver Talk: AI Isn't Evil and Smart Contract Proves It
Sample ERC-7641 Implementation: GitHub Repository
ERC-7007: Verifiable AI-Generated Content Token
ERC-7641: Intrinsic RevShare Token
For questions or contributions to the IMO ecosystem, please contact ORA.
The ORA Points Program is focused on identifying contributors who engage with the ORA ecosystem.
Points act as an indicator for us to recognize the most committed contributors for their early support of ORA's ecosystem as it evolves.
Track your points on the dashboard.
ORA's AI Oracle enables verifiable, decentralized AI inference onchain. You can interact with it to obtain verifiable inference results.
To complete:
Go to the models page on the website (https://www.ora.io/app/opml/models)
Select a model
Use the AI Oracle on mainnets (e.g., Ethereum mainnet, Optimism mainnet). Follow the integration tutorial.
Sign transaction for calling AI request
3-6 points are earned per use
Point earning activities include:
i) Join using referral code
Fill in your referral code in “Enter Code” section
5 Points are earned for this task
ii) Refer a friend:
Locate your referral code on Dashboard
Distribute referral code to others
10% of referee points are earned
By operating an ORA node, you can actively participate in the ORA Network.
Follow the guide to set up and run the node to confirm AI inference result on Ethereum mainnet
Users are rewarded points for running each AI inference and validating on Ethereum mainnet
Read more about our work
Linktree:
Website:
Twitter:
opp/ai Paper:
opML Paper:
ERC-7007 and AIGC NFTs:
ERC 7641 for IMOs:
We hold regular World Supercomputer Summits. Check out the recaps:
For more details, check out World Supercomputer's documentation.
Early contributors: Staking
Staking is designed to ensure the security of the fully decentralized AI Oracle network in the future. Our points system is being used to reward early adopters.
*: Points are proportional to your staked amount.
You will still be able to participate in OLM's revenue sharing if OLM is staked.
Currently, ORA Points Program Staking supports these assets to be staked in:
ETH (Native ETH)
stETH (Lido)
STONE (StakeStone)
OLM (OpenLM)
USDT (Tether)
USDC (USDC)
In the first phase, ORA Points Program Staking has a cap on total staked assets.
ETH & LST Pool (ETH, stETH, STONE): 10,000 in ETH & LST
OLM Flexible Pool (OLM): 300,000,000 in OLM
OLM Locked Pool (OLM): 300,000,000 in OLM
Stablecoin Pool (USDT & USDC): 1,000,000 in USDT & USDC
Staking is disabled from the frontend since points program's season 1 has ended.
2) Choose the pool you want to stake, click on the arrow to open staking page.
3) Select the token you want to stake, and enter amount.
4) Press "CONFIRM", and sign the transaction in your wallet. Points are received once the transaction is confirmed on blockchain.
Migrating is disabled from the frontend since points program's season 1 has ended.
If you have already staked into OLM (Flexible), and want to migrate these staked tokens to OLM (Locked), you do not need to withdraw then re-stake. You may directly migrate from OLM (Flexible) to OLM (Locked).
2) Choose OLM (Flexible) which you've already staked, click on the arrow to open staking page.
3) Switch to "CHANGE POOL" tab. Select the staked token you want to migrate, and enter amount.
4) Press "CHANGE POOL", and sign the transaction in your wallet. You will migrate your asset to OLM (Locked) once transaction is confirmed on blockchain.
3) Switch to "WITHDRAW" tab. Select the staked token you want to withdraw, and enter amount.
4) Press "INITIATE UNSTAKE", and sign the transaction in your wallet. You initiate unstake once transaction is confirmed on blockchain.
To ensure the security of ORA Points Program Staking, all funds go through a 1-day waiting period before becoming eligible to be fully withdrawn.
5) Switch to the second window of "WITHDRAW" tab by clicking the "WITHDRAW" button at the top right.
6) Press "CLAIM WITHDRAW" when the assets are eligible to be fully withdrawn at given time, and sign the transaction in your wallet. You complete withdraw once transaction is confirmed on blockchain.
Staking points are measured as the time-integrated amount staked, in units of TOKEN × days.
The points earned by one staked asset are p = c × m × t, where p is the total points awarded, t is the staking time (t = n if you stake for n days), c is the staking-points constant for the asset, and m is the amount of the staked asset.
The constant of staking points (equals to the points you get if you stake 1 unit of asset for 1 day):
ETH & LST Pool (ETH, stETH, STONE): 8. You will receive 8n points per n ETH / LST staked every 24 hours.
OLM Flexible Pool (OLM): 0.0024. You will receive 24n points per 10,000n OLM staked every 24 hours.
OLM Locked Pool (OLM): 0.0048. You will receive 48n points per 10,000n OLM staked every 24 hours.
Stablecoin Pool (USDT & USDC): 0.001. You will receive 3n points per 3,000n USDT / USDC staked every 24 hours.
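Putting the per-pool constants together, the points calculation can be sketched as follows. The pool keys are invented for this sketch; the constants are the ones listed above.

```python
# Staking-points calculator: p = c * m * t
# (constants per unit of asset per day, from the list above)
CONSTANTS = {
    "ETH_LST": 8.0,        # ETH & LST pool
    "OLM_FLEX": 0.0024,    # OLM flexible pool
    "OLM_LOCKED": 0.0048,  # OLM locked pool
    "STABLE": 0.001,       # stablecoin pool
}

def points(pool: str, amount: float, days: float) -> float:
    """Total points p for staking `amount` of an asset for `days` days."""
    return CONSTANTS[pool] * amount * days
```

For example, staking 2 ETH for 3 days earns 8 × 2 × 3 = 48 points.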
Note:
Points are updated and awarded every day.
You may stake multiple assets to earn points.
An operation that brings together multiple elements
Onchain AI Oracle (OAO) has been renamed to AI Oracle. You may still see this term in some of our documentation and code base.
AI Oracle is ORA's verifiable and decentralized AI oracle, enabling developers to integrate AI functionalities into their smart contracts seamlessly. The AI Oracle is supported by ORA's network of permissionless nodes running the TORA client and is secured by optimistic Machine Learning (opML).
The system consists of 2 parts:
Network of nodes - Nodes execute AI inference and return results back to the blockchain. Validity of the result is proven through one of the proving frameworks: opML or zkML.
Set of smart contracts - Any dapp can request AI inference by calling AI Oracle smart contracts. Oracle nodes listen to the events emitted by the AI Oracle smart contracts and execute AI inference. Upon the successful execution, the results are returned in the callback function.
A TypeScript-like language for WebAssembly.
A type of middleware that performs operations without human control. Commonly used in blockchain due to smart contracts' inability to trigger automatic functions and DApps' need for periodic calls.
Computational Entity. The customizable and programmable software defined by developers and the underlying oracle network running those software.
A software for operating a node. Usually developed by the network's core developer team.
A time window for finalizing disagreement on computation. Usually takes weeks. Used in traditional oracle networks.
Refers to Ethereum's consensus algorithm.
An operation that rejects unwanted elements
A staking mechanism that involves a "fisherman" to check the integrity of nodes and raise disputes, and an "arbitrator" to decide dispute outcomes.
An approach for querying data. Commonly used in the front-end of DApps.
A high-performance implementation of zk-SNARK by Electric Coin Co.
Hyper Oracle has been rebranded to ORA.
A type of middleware that fetches and organizes data. Commonly used in blockchain due to blockchain data's unique storage model.
Initial Model Offering (IMO) is a mechanism for tokenizing an open-source AI model. Through its revenue sharing model, IMO fosters transparent and long-term contributions to any AI model.
The process works as follows:
IMO launches an ERC-20 token (more specifically, ERC-7641 Intrinsic RevShare Token) of any AI model to capture its long-term value.
Anyone who purchases the token becomes one of the owners of this AI model.
Token holders share revenue generated from onchain usage of the tokenized AI model.
A programming language for Web.
A delay after outputting execution result caused by proof generation or dispute period.
An operation that associates input elements with output elements.
Services or infrastructure needed in pipeline of development.
A computer running a client.
On the (specific) blockchain / not on the (specific) blockchain. Usually refers to data or computation.
Name of our project. Previously HyperOracle.
A component for processing data in DApp development. Can be input oracle (input off-chain data to on-chain), output oracle (output on-chain data to off-chain), and I/O oracle (output oracle, then input oracle).
Able to be customized and defined by code.
A process of producing zero-knowledge proof. Usually takes much longer than execution only.
A ZK proof that verifies other ZK proofs, for compressing more knowledge into a single proof.
A process that involves the burning of staked token for a node with bad behavior.
A required process that involves the depositing and locking of token for a new-entered node into network.
Codes that define and configs indexing computation of The Graph's indexer.
Complete alignment with the Subgraph specification and syntax.
Able to be verified easily with less computation resources than re-execution. One important quality of zero-knowledge proof.
A model for analyzing trust assumptions and security. See Vitalik's post on trust models.
Able to trust without relying on third-party.
A Strongly-typed JavaScript.
Refers to correctness of computation.
A type of zero-knowledge proof that does not utilize its privacy feature, but succinctness property.
Able to be checked or proved to be correct.
A computation to check if proof is correct. If verification is passed, it means proven statement is correct and finalized.
A role that requires prover to convince with proof. Can be a person, a smart contract in other blockchains, a mobile client, or a web client.
A binary code format or programming language. Commonly used in Web.
A global P2P network linking three topologically heterogeneous networks (Ethereum, ORA, Storage Rollup) with zero-knowledge proof.
A cryptographic method for proving. Commonly used in blockchain for privacy and scaling solutions. Commonly misused for succinctness property only.
A commonly-used zero-knowledge proof cryptography.
A trustless automation CLE standard with ZK, also known as zkAutomation.
A zkVM with EVM instruction set as bytecode.
A virtual machine with zk that generates zk proofs.
RMS is built on the foundation of opML (Optimistic Machine Learning), a pioneering approach by ORA that integrates the verifiability, decentralization, and resilience of blockchain technology with AI computation.
Learn more in the opML documentation.
The architecture of Resilient Model Services (RMS) leverages blockchain for an onchain service contract, which operates akin to a gRPC protobuf by defining services and their callable methods.
Providers must register and stake on this contract to offer services. When a user requests an AI service, they use a verifiable blockchain light client like Helios to fetch a list of available providers from the blockchain.
The user then selects multiple providers randomly and sends offchain transactions to them, which mimic smart contract function calls but are designated for cloud service execution through a unique chain ID.
Providers perform the computation locally, sign, and return the results to the user. The user verifies these results for consistency. If consistent, the process is complete; if not, discrepancies trigger an on-chain arbitration where providers must prove the correctness of their computations.
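The client-side consistency check described above can be sketched as a simple comparison of provider outputs. This is a hypothetical helper, not ORA's actual RMS client; signature verification is omitted.

```python
from typing import Optional, Tuple

def check_results(provider_results: list) -> Tuple[bool, Optional[str]]:
    """Compare outputs returned by the randomly selected providers.

    Returns (True, output) when all providers agree; (False, None) when a
    discrepancy is found, in which case onchain arbitration would be
    triggered and providers must prove the correctness of their computation.
    """
    unique = set(provider_results)
    if len(unique) == 1:
        return True, provider_results[0]
    return False, None
```

Because only one honest provider is needed to surface a discrepancy, this check is what gives RMS its minority trust assumption.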
Optimistic Machine Learning (opML) is the backbone of RMS, designed to integrate the benefits of blockchain technology with AI computation:
Verifiability: opML leverages blockchain for the correctness of AI computations.
Decentralization: The computational load is distributed across decentralized service providers.
Resilience: By requiring only one honest provider to guarantee correct service, RMS operates under a minority trust assumption, making it resilient against malicious actors.
Transparency: All interactions and results are logged on the blockchain, providing transparency.
N-to-M Model: Unlike centralized services, RMS scales by distributing load across many providers, potentially offering better performance during peak times.
Cost-Effectiveness: Decentralized resources can be cheaper than centralized cloud services, with efficient use of decentralized GPUs or CPUs.
For privacy-sensitive computations, FHE can be integrated, allowing users to send encrypted data for processing, with results verifiable without decrypting the data during arbitration.
ORA Terms of Use
Last Modified: Sep 23, 2024
These Terms of Use ("Terms") describe your rights and obligations while using the ORA website, software, services, and other offerings (collectively, "Services"). Please read these Terms carefully and in their entirety, as they include important information about your legal rights, remedies, and obligations. These Terms, together with our Privacy Policy constitute an agreement between the user ("you") and ORA ("ORA", "we", "us", "our").
Acceptance of Terms: By accessing or using our Services, you acknowledge that you have read, understand, and agree to be bound by these Terms. We may, in our sole discretion, revise the Terms from time to time with the new terms taking effect on the "Last updated" date. By continuing to use our Services after the changes become effective, you agree to be bound by the revised Terms. We may operate additional programs or services which require separate or additional terms. In such cases, you agree to be further bound by the terms specific to the additional program or service, and such terms shall control to the extent there is a conflict with these Terms. IF YOU DO NOT AGREE TO THESE TERMS, YOU MAY NOT ACCESS OR USE OUR SERVICES.
Privacy: How we collect, use, and disclose information, including personal information, that you provide to us via the Services is described in our Privacy Policy found at .
User Account Registration: You do not need to create an account (“User Account”) to access the Services; however, we may, from time to time, restrict access to certain features, parts, or content of the Services, or the entire Service to only those users who have created a User Account. You agree to provide accurate, current, and complete information during the registration process. If you create a User Account you are entirely responsible for the security and confidentiality of that account, including your password to access the User Account. Furthermore, you are entirely responsible for any and all activities that occur under your User Account. You agree to immediately notify us of any unauthorized use of your User Account or any other breach of your User Account's security of which you become aware. You are responsible for taking precautions and providing security measures best suited for your situation and intended use of the Services. By creating a User Account, you agree to receive service-related electronic communications from us. You agree that any notices, agreements, disclosures, or other communications that we send to you electronically will satisfy any legal communication requirements, including, but not limited to, that such communications be in writing. You may opt-out of receiving promotional emails that you have previously opted-in to at any time by following the instructions to unsubscribe, as provided therein.
Use of Services: You agree to use our Services only for purposes that are permitted by these Terms and any applicable law, regulation, or generally accepted practices or guidelines in the relevant jurisdictions. You agree you will not engage or attempt to engage in any improper uses of the Services, including, but not limited to: (i) violating any applicable federal, state, local, or international law or regulation (including, without limitation, any laws regarding the export of data or software to and from the US or other countries); (ii) storing the Services (including pages of the Service) on a server or other storage device connected to a network or creating a database by systematically downloading and storing any data from the Services (other than for page caching); (iii) removing or changing any content of the Services or attempting to circumvent the security or interfere with the proper working of the Services or any servers on which it is hosted; (iv) creating links to the Services from any other website without our prior written consent; (v) using any robot, data mining, screen scraping, spider, website search/retrieval application, or other manual or automatic device or process to retrieve, index, “data mine”, or in any way reproduce or circumvent the navigational structure or presentation of the Services or their contents; (vi) posting, distributing, or reproducing in any way any copyrighted material, trademarks, or other proprietary information without obtaining the prior written consent of the owner of such proprietary rights; (vii) interfering with or disrupting the Services or the servers or networks connected to the Services; and (viii) modifying, copying, reproducing, duplicating, adapting, sublicensing, translating, selling, reverse engineering, deciphering, decompiling, or otherwise disassembling any portion of the Services or any software used on or for the Services or causing others to do so.
ORA Points: Earning and Redeeming ORA Points: You may accrue ORA Points through use of the Services, including by participating in various activities that we may specify from time to time. The specific ORA Points you may accrue may differ depending on factors including the Services you use and your eligibility. We may specify how users can redeem ORA Points, including to access special features or exclusive areas of the Services, at our sole discretion.
Eligibility: To participate in the ORA Points Program, your User Account must be continuously in good standing. If your User Account is not in good standing for any reason, or we determine in our sole discretion that you are abusing, gaming, or misusing the ORA Points Program or have otherwise violated our Terms or any of the terms, agreements, and policies incorporated by reference, you may be ineligible to accrue ORA Points, and you may forfeit any ORA Points previously earned or accrued. Certain User Accounts may not be eligible to participate in the ORA Points Program. We may update or change eligibility criteria, restrictions, and requirements at any time.
Reversals or Failure to Accrue: The accrual of ORA Points may be reversed where we determine, in our sole discretion, that the conditions required for accrual of ORA Points were not satisfied. If this results in a negative ORA Points balance, we may subtract a proportionate number of ORA Points from existing ORA Points balances or any future ORA Points that would otherwise accrue to your User Account.
Ownership of Points: ORA Points are not your property. Any ORA Points accrued by users will be reflected in your User Account in accordance with these Terms. ORA Points may only be redeemed by the User Account owner, and no user is entitled to use any accrued ORA Points other than as approved by the User Account owner. ORA Points are not transferable to any third party or any other User Account not owned by you, unless otherwise specified by ORA. Any non-permitted attempt to transfer ORA Points is void, and any ORA Points that you attempt to transfer may be forfeited.
Suspension and Termination of Access to the Services: We may, at our option and in our sole discretion, suspend, restrict, or terminate your access to the Services if: (i) we are so required by a facially valid subpoena, court order, or binding order of any government authority; (ii) we reasonably suspect you of using the Services in connection with any prohibited uses stated in Section 4 of these Terms; (iii) your use of the Services is subject to any pending litigation, investigation, or government proceeding and/or we, in our sole discretion, perceive a heightened risk of legal or regulatory non-compliance associated with your activity; (iv) any of our service partners are unable to support your use thereof; (v) you take any action that we deem in our sole discretion as circumventing our controls and/or safeguards; or (vi) you breach these Terms. If we suspend or terminate your use of the Services for any reason, we will provide you with notice of our actions, unless a court order or other legal process prevents or prohibits us from providing you with such notice. You acknowledge that our decision to take certain actions, including limiting access to or suspending your access to the Services, may be based on confidential criteria that are essential to our risk management and/or security protocols. You agree that we are under no obligation to disclose the details of our risk management and/or security procedures to you.
Intellectual Property: All photos, videos, images, and text on the Services, together with the design and layout of the Services (“Content”) are copyrighted and may not be used without our written permission. All intellectual property rights in the Services and in any Content of the Services (including, but not limited to, text, graphics, design, layout, software, photographs, and other images, videos, sound, trademarks, and logos) are owned by us or our licensors. Except as expressly set forth herein, nothing in the Terms gives you any rights in respect of any intellectual property owned by us or our licensors and you acknowledge that you do not acquire any ownership rights by downloading or using the Services. The Services and its Content, features, and functionality are and will remain the exclusive property of ORA. Our Services are protected by copyright, trademark, and other laws of both the United States and foreign countries. These Terms grant you a limited and non-exclusive right to use the Services. Except as indicated otherwise herein or in any additional terms or conditions, you may not reproduce, distribute, modify, create derivative works of, publicly display, publicly perform, republish, download, store, transmit or otherwise exploit any of the Content on our Services. You are expressly prohibited from: (i) modifying or making copies of any Content from the Services; (ii) using any illustrations, photographs, video or audio sequences or any graphics available through the Services separately from the accompanying text; (iii) deleting or altering any copyright, trademark, or other proprietary rights notices from copies of materials available through the Services; and (iv) uploading Content to the Services which violates the intellectual property rights of others. Anything you send to us through the Services, email, or other means may be used by us for any purpose. 
By submitting any Content via our Services, you acknowledge and agree that we retain all right, title, and interest, including all intellectual property rights, in and to the Content and any enhancements, modifications, or derivative works thereof. By submitting material and/or Content to us through the Services, email, or other means, you irrevocably transfer and assign to ORA, and forever waive, and agree never to assert, any copyrights, moral rights, or other rights that you may have in such material and/or Content. We are free to use, without obligation of any kind, any ideas, concepts, techniques, know-how, materials, and/or Content contained in any communication you send to us or to the Services for any purpose whatsoever including, without limitation, the right to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, perform, and display such ideas, concepts, techniques, know-how, materials, and/or Content. This paragraph shall not apply to your personal information, which is defined in and governed by the Privacy Policy.
Early-Stage Service and Disclaimer of Warranties: Please be aware that our Services are in the early stages of development. As such, they may be subject to stability issues and intermittent downtime. We are continuously working to improve the stability and functionality of our Services, but we cannot guarantee uninterrupted service. YOU ACKNOWLEDGE THAT YOUR USE OF THE SERVICES IS AT YOUR SOLE RISK (INCLUDING BUT NOT LIMITED TO ANY DAMAGE TO YOUR COMPUTER SYSTEM, LOSS OF DATA, DAMAGE RESULTING FROM RELIANCE ON THE SERVICES, OR OTHER DAMAGES THAT RESULT FROM OBTAINING ANY CONTENT FROM THE SERVICES INCLUDING COMPUTER VIRUSES). TO THE EXTENT PERMITTED BY LAW, ORA, ITS SUBSIDIARIES, AFFILIATES, LICENSORS, SERVICE PROVIDERS, CONTENT PROVIDERS, EMPLOYEES, AGENTS, OFFICERS, AND DIRECTORS (COLLECTIVELY, THE “ORA PARTIES”) PROVIDE THE SERVICES, AND ALL CONTENT CONTAINED THEREIN, “AS IS,” “AS AVAILABLE,” AND “WITH ALL FAULTS,” WITHOUT WARRANTY OF ANY KIND, AND SPECIFICALLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, TITLE, CUSTOM, TRADE, QUIET ENJOYMENT, SYSTEM INTEGRATION, AND FREEDOM FROM COMPUTER VIRUSES. NO INFORMATION PROVIDED VIA THE SERVICES SHALL CREATE ANY WARRANTY. WITHOUT LIMITING THE GENERALITY OF THE FOREGOING, THE ORA PARTIES MAKE NO WARRANTY, REPRESENTATION, COVENANT, OR GUARANTEE WHATSOEVER, EXPRESS OR IMPLIED: (i) AS TO THE VALUE, QUALITY, TIMELINESS, USEFULNESS, RELIABILITY, SECURITY, SUITABILITY, ACCURACY, TRUTHFULNESS, OR COMPLETENESS OF THE SERVICES; (ii) THAT THE SERVICES WILL OPERATE UNINTERRUPTED OR ERROR-FREE; (iii) THAT THE SERVICES WILL MEET YOUR NEEDS OR EXPECTATIONS; (iv) AS TO THE QUALITY OR VALUE OF ANY OF ORA’S PRODUCTS, SERVICES, CONTENT, INFORMATION, OR OTHER MATERIAL YOU PURCHASE OR OBTAIN VIA THE SERVICES; OR (v) THAT ANY ERRORS PERTAINING TO THE SERVICES WILL BE CORRECTED.
THE ORA PARTIES DO NOT WARRANT THAT YOUR USE OF THE SERVICES IS LAWFUL IN ANY PARTICULAR JURISDICTION, AND THE ORA PARTIES SPECIFICALLY DISCLAIM SUCH WARRANTIES. BY ACCESSING OR USING THE SERVICES YOU REPRESENT AND WARRANT THAT YOUR ACTIVITIES ARE LAWFUL IN EVERY JURISDICTION WHERE YOU ACCESS OR USE THE SERVICES. SOME JURISDICTIONS DO NOT ALLOW THE DISCLAIMER OF IMPLIED OR OTHER WARRANTIES, SO THE ABOVE DISCLAIMER MAY NOT APPLY TO YOU TO THE EXTENT SUCH JURISDICTION’S LAW IS APPLICABLE TO YOU AND THESE TERMS.
Limitation of Liability: UNDER NO CIRCUMSTANCES WILL THE ORA PARTIES BE LIABLE TO YOU FOR ANY LOSS OR DAMAGES OF ANY KIND (INCLUDING, WITHOUT LIMITATION, FOR ANY DIRECT, INDIRECT, ECONOMIC, EXEMPLARY, SPECIAL, PUNITIVE, INCIDENTAL OR CONSEQUENTIAL LOSSES OR DAMAGES) THAT ARE DIRECTLY OR INDIRECTLY RELATED TO: (A) THE SERVICES; (B) YOUR USE OF, INABILITY TO USE, OR THE PERFORMANCE OF THE SERVICES; (C) ANY ACTION TAKEN IN CONNECTION WITH AN INVESTIGATION BY THE ORA PARTIES OR LAW ENFORCEMENT AUTHORITIES REGARDING YOUR OR ANY OTHER PARTY'S USE OF THE SERVICES; (D) ANY ACTION TAKEN IN CONNECTION WITH COPYRIGHT OR OTHER INTELLECTUAL PROPERTY OWNERS; (E) ANY ERRORS OR OMISSIONS IN THE OPERATION OF THE SERVICES; OR (F) ANY DAMAGE TO ANY USER'S COMPUTER, MOBILE DEVICE, OR OTHER EQUIPMENT OR TECHNOLOGY INCLUDING, WITHOUT LIMITATION, DAMAGE FROM ANY SECURITY BREACH OR FROM ANY VIRUS, BUGS, TAMPERING, FRAUD, ERROR, OMISSION, INTERRUPTION, DEFECT, DELAY IN OPERATION OR TRANSMISSION, COMPUTER LINE OR NETWORK FAILURE OR ANY OTHER TECHNICAL OR OTHER MALFUNCTION, INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOST PROFITS, LOSS OF GOODWILL, LOSS OF DATA, WORK STOPPAGE, ACCURACY OF RESULTS, OR COMPUTER FAILURE OR MALFUNCTION, EVEN IF FORESEEABLE OR EVEN IF THE ORA PARTIES HAVE BEEN ADVISED OF OR SHOULD HAVE KNOWN OF THE POSSIBILITY OF SUCH DAMAGES, WHETHER IN AN ACTION OF CONTRACT, STRICT LIABILITY OR TORT (INCLUDING, WITHOUT LIMITATION, WHETHER CAUSED IN WHOLE OR IN PART BY
NEGLIGENCE, ACTS OF GOD, TELECOMMUNICATIONS FAILURE, OR THEFT OR DESTRUCTION OF THE SERVICE). IN NO EVENT WILL THE ORA PARTIES BE LIABLE TO YOU OR ANYONE ELSE FOR LOSS, DAMAGE OR INJURY, INCLUDING, WITHOUT LIMITATION, DEATH OR PERSONAL INJURY. SOME STATES DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE LIMITATION OR EXCLUSION MAY NOT APPLY TO YOU. IN NO EVENT WILL THE ORA PARTIES’ TOTAL LIABILITY TO YOU FOR ALL DAMAGES, LOSSES OR CAUSES OF ACTION EXCEED THE AMOUNT PAID, IF ANY, BY YOU TO US FOR THE SERVICES DURING THE PRECEDING YEAR. YOU AGREE THAT IN THE EVENT YOU INCUR ANY DAMAGES, LOSSES OR INJURIES THAT ARISE OUT OF ORA'S ACTS OR OMISSIONS, THE DAMAGES, IF ANY, CAUSED TO YOU ARE NOT IRREPARABLE OR SUFFICIENT TO ENTITLE YOU TO AN INJUNCTION PREVENTING ANY EXPLOITATION OF ANY WEB SITE, SERVICE, PROPERTY, PRODUCT OR OTHER CONTENT OWNED OR CONTROLLED BY THE ORA PARTIES, AND YOU WILL HAVE NO RIGHTS TO ENJOIN OR RESTRAIN THE DEVELOPMENT, PRODUCTION, DISTRIBUTION, ADVERTISING, EXHIBITION OR EXPLOITATION OF ANY WEB SITE, PROPERTY, PRODUCT, SERVICE, OR OTHER CONTENT OWNED OR CONTROLLED BY THE ORA PARTIES. BY ACCESSING THE SERVICES, YOU UNDERSTAND THAT YOU MAY BE WAIVING RIGHTS WITH RESPECT TO CLAIMS THAT ARE AT THIS TIME UNKNOWN OR UNSUSPECTED. We reserve the right, at any time, in our sole and exclusive discretion, to amend, modify, suspend, or terminate the Services, or any part thereof, and/or your use of or access to them, with or without notice. We shall have no liability to you or any other person or entity for any modification, suspension, or termination, or any loss of related information.
Indemnity: To the fullest extent permitted by applicable law, you agree to indemnify, defend, and hold harmless the ORA Parties from and against any and all actual or alleged claims, liabilities, damages, losses, costs, and/or expenses, including without limitation, reasonable attorney’s fees and costs, arising out of or in any way connected with the following (whether resulting from your activities on the Services or those conducted on your behalf): (i) your access to or use of the Services; (ii) your breach or alleged breach of these Terms; or (iii) your violation of any third-party right, including without limitation, any intellectual property right, publicity, confidentiality, property, or privacy right. You agree that the ORA Parties will have no liability in connection with any such breach or unauthorized use, and you agree to indemnify, defend, and hold the ORA Parties harmless from any and all resulting loss, damages, judgments, awards, costs, expenses, and attorneys’ fees in connection therewith. You will cooperate as fully required by us in the defense of any claim. The ORA Parties reserve the right to assume exclusive control of their defense in any matter subject to your indemnification, which shall not excuse your obligation to indemnify the ORA Parties. You shall not settle any claim without the prior written consent of ORA.
Dispute Resolution, Agreement to Arbitrate, and Class Action Waiver: This Section includes an arbitration agreement and an agreement that all claims will be brought only in an individual capacity (and not as a class action or other representative proceeding). Please read it carefully.
Informal Process First. You agree that in the event of any dispute between you and ORA, you will first contact us and make a good faith, sustained effort to resolve the dispute, allowing us 30 days in which to respond, before resorting to more formal means of resolution, including without limitation any court action.
Arbitration Agreement and Class Action Waiver. After the informal dispute resolution process, any remaining dispute, controversy, or claim (collectively, "Claim") relating in any way to the Services, any use of or access to the Services (or lack thereof), and any other related usage, even outside of the Services, will be resolved by arbitration, including threshold questions of arbitrability of the Claim. You and ORA agree that any Claim will be settled by final and binding arbitration, using the English language, administered by JAMS under its Comprehensive Arbitration Rules and Procedures (the "JAMS Rules") then in effect (the JAMS Rules are deemed incorporated by reference into this section as of the date of these Terms). Because your contract with us, these Terms, and this Arbitration Agreement concern interstate commerce, the Federal Arbitration Act ("FAA") governs the arbitrability of all disputes. However, the arbitrator will apply applicable substantive law consistent with the FAA and the applicable statute of limitations or condition precedent to suit. Arbitration will be handled by a sole arbitrator in accordance with the JAMS Rules. Judgment on the arbitration award may be entered in any court that has jurisdiction. Any arbitration under these Terms will take place on an individual basis; class arbitrations and class actions are not permitted. You understand that by agreeing to these Terms, you and ORA are each waiving the right to trial by jury or to participate in a class action or class arbitration.
Batch Arbitration. To increase the efficiency of administration and resolution of arbitrations, you and ORA agree that in the event that there are one hundred (100) or more individual Claims of a substantially similar nature filed against us by or with the assistance of the same law firm, group of law firms, or organizations, then within a thirty (30) day period (or as soon as possible thereafter), JAMS shall (1) administer the arbitration demands in batches of 100 Claims per batch (plus, to the extent there are fewer than 100 Claims left over after the batching described above, a final batch consisting of the remaining Claims); (2) appoint one arbitrator for each batch; and (3) provide for the resolution of each batch as a single consolidated arbitration with one set of filing and administrative fees due per side per batch, one procedural calendar, one hearing (if any) in a place to be determined by the arbitrator, and one final award (“Batch Arbitration”). All parties agree that Claims are of a “substantially similar nature” if they arise out of or relate to the same event or factual scenario and raise the same or similar legal issues and seek the same or similar relief. To the extent the parties disagree on the application of the Batch Arbitration process, the disagreeing party shall advise JAMS, and JAMS shall appoint a sole standing arbitrator to determine the applicability of the Batch Arbitration process (“Administrative Arbitrator”). In an effort to expedite resolution of any such dispute by the Administrative Arbitrator, the parties agree the Administrative Arbitrator may set forth such procedures as are necessary to resolve any disputes promptly. The Administrative Arbitrator’s fees shall be paid by us.
You and ORA agree to cooperate in good faith with JAMS to implement the Batch Arbitration process including the payment of single filing and administrative fees for batches of Claims, as well as any steps to minimize the time and costs of arbitration, which may include: (1) the appointment of a discovery special master to assist the arbitrator in the resolution of discovery disputes; and (2) the adoption of an expedited calendar of the arbitration proceedings. This Batch Arbitration provision shall in no way be interpreted as authorizing a class, collective and/or mass arbitration or action of any kind, or arbitration involving joint or consolidated claims under any circumstances, except as expressly set forth in this provision.
Exceptions. Notwithstanding the foregoing, you and ORA agree that the following types of disputes will be resolved in a court of proper jurisdiction: (i) disputes or claims within the jurisdiction of a small claims court consistent with the jurisdictional and dollar limits that may apply, as long as it is brought and maintained as an individual dispute and not as a class, representative, or consolidated action or proceeding; (ii) disputes or claims where the sole form of relief sought is injunctive relief (including public injunctive relief); or (iii) intellectual property disputes.
Costs of Arbitration. Payment of all filing, administration, and arbitrator costs and expenses will be governed by the JAMS Rules, except that if you demonstrate that any such costs and expenses owed by you under those rules would be prohibitively more expensive than a court proceeding, ORA will pay the amount of any such costs and expenses that the arbitrator determines are necessary to prevent the arbitration from being prohibitively more expensive than a court proceeding (subject to possible reimbursement as set forth below).
WAIVER OF RIGHT TO BRING CLASS ACTION AND REPRESENTATIVE CLAIMS. TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, YOU AND ORA EACH AGREE THAT ANY PROCEEDING TO RESOLVE ANY DISPUTE, CLAIM OR CONTROVERSY WILL BE BROUGHT AND CONDUCTED ONLY IN THE RESPECTIVE PARTY'S INDIVIDUAL CAPACITY AND NOT AS PART OF ANY CLASS (OR PURPORTED CLASS), CONSOLIDATED, MULTIPLE-PLAINTIFF, OR
REPRESENTATIVE ACTION OR PROCEEDING ("CLASS ACTION"). YOU AND ORA AGREE TO WAIVE THE RIGHT TO PARTICIPATE AS A PLAINTIFF OR CLASS MEMBER IN ANY CLASS ACTION. YOU AND ORA EXPRESSLY WAIVE ANY ABILITY TO MAINTAIN A CLASS ACTION IN ANY FORUM. IF THE DISPUTE IS SUBJECT TO ARBITRATION, THE ARBITRATOR WILL NOT HAVE THE AUTHORITY TO COMBINE OR AGGREGATE CLAIMS, CONDUCT A CLASS ACTION, OR MAKE AN AWARD TO ANY PERSON OR ENTITY NOT A PARTY TO THE ARBITRATION. FURTHER, YOU AND ORA AGREE THAT THE ARBITRATOR MAY NOT CONSOLIDATE PROCEEDINGS FOR MORE THAN ONE PERSON'S CLAIMS, AND MAY NOT OTHERWISE PRESIDE OVER ANY FORM OF A CLASS ACTION. FOR THE AVOIDANCE OF DOUBT, HOWEVER, YOU CAN SEEK PUBLIC INJUNCTIVE RELIEF TO THE EXTENT AUTHORIZED BY LAW AND CONSISTENT WITH THE EXCEPTIONS CLAUSE ABOVE. IF THIS CLASS ACTION WAIVER IS LIMITED, VOIDED, OR FOUND UNENFORCEABLE, THEN, UNLESS THE PARTIES MUTUALLY AGREE OTHERWISE, THE PARTIES' AGREEMENT TO ARBITRATE SHALL BE NULL AND VOID WITH RESPECT TO SUCH PROCEEDING SO LONG AS THE PROCEEDING IS PERMITTED TO PROCEED AS A CLASS ACTION. IF A COURT DECIDES THAT THE LIMITATIONS OF THIS PARAGRAPH ARE DEEMED INVALID OR UNENFORCEABLE, ANY PUTATIVE CLASS, PRIVATE ATTORNEY GENERAL, OR CONSOLIDATED OR REPRESENTATIVE ACTION MUST BE BROUGHT IN A COURT OF PROPER JURISDICTION AND NOT IN ARBITRATION. YOU AGREE THAT ANY CLAIM YOU MAY HAVE ARISING OUT OF OR RELATED TO YOUR RELATIONSHIP WITH ORA MUST BE BROUGHT WITHIN ONE (1) YEAR AFTER SUCH CLAIM AROSE; OTHERWISE, YOUR CLAIM WILL BE PERMANENTLY BARRED.
Governing Law and Jurisdiction: Unless specified otherwise, these Terms shall be governed by the laws of the British Virgin Islands. You agree to submit to the personal jurisdiction of the courts located in the British Virgin Islands for any actions for which we retain the right to seek injunctive or other equitable relief.
External Links: The Services may contain links to third-party websites and/or applications or otherwise redirect you to third-party websites, applications (including meeting software), or services (collectively, the “Linked Websites”). The Linked Websites are not under our control, and we are not responsible for any Linked Website, including, but not limited to, any content contained in a Linked Website or any changes or updates to the Linked Websites. The Linked Websites may require you to agree to additional terms and conditions between you and such third party. When you click on a link to a Linked Website, we will not warn you that you have left the Services and are subject to the terms and conditions (including privacy policies, if and as applicable) of another website or destination. WE ARE NOT RESPONSIBLE FOR ANY SUCH TERMS AND CONDITIONS OR ANY DAMAGES YOU MAY INCUR BY USING THE LINKED WEBSITES. ORA provides these Linked Websites only as a convenience and does not review, approve, monitor, endorse, warrant, or make any representations with respect to the Linked Websites or their products or services. You use all links to the Linked Websites at your own risk.
Assignment: These Terms, and any rights and licenses granted hereunder, may not be transferred or assigned by you, but we may assign them without restriction. Any attempted transfer or assignment in violation hereof will be null and void.
Severability: If any provision of the Terms is deemed invalid by a court of competent jurisdiction, the invalidity of such provision shall not affect the validity of the remaining provisions of the Terms, which shall remain in full force and effect.
Entire Agreement: These Terms, together with any amendments and any additional agreements you may enter into with us in connection with the Services, shall constitute the entire agreement between you and us concerning the Services.
Waiver: No waiver of any term of the Terms shall be deemed a further or continuing waiver of such term or any other term, and our failure to assert any right or provision under the Terms shall not constitute a waiver of such right or provision.
ORA Privacy Policy
Last Modified:
Sep 23, 2024
Introduction
Hyper Oracle INC. ("ORA", "we", "us") respects your privacy and provides this Privacy Policy (the "policy") to inform you about the personal information we may collect from you or that you may provide when you visit the website (our "Website") and our treatment of and practices related to collecting, using, and disclosing personal information.
As used in this policy, "personal information" relates to information that identifies, relates to, describes, or could reasonably be linked with an identifiable natural person. This policy applies to personal information we collect:
On this Website.
In email, text, and other electronic communications between you and us.
Through mobile and desktop applications you download from this Website, which provide dedicated non-browser-based interaction between you and this Website.
This policy does not apply to websites, applications, or services that do not display or link to this policy or that display or link to different privacy statements. Please read this policy carefully to understand our policies and practices regarding your personal information and how we will treat it. By accessing or using this Website, you consent to this policy and our Terms of Service.
Information We Collect About You and How We Collect It: We may collect the following categories of personal information from and about users of our Website:
Identifiers: such as name, postal address, e-mail address, telephone number, social media username, or other identifying information.
Device Information: such as internet protocol (IP) address, web browser type, operating system version, phone carrier and manufacturer, application installations, device identifiers, and mobile advertising identifiers.
Usage data: such as internet or other electronic network activity information including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an internet website, application, or advertisement.
Profiles and inferences: inferences drawn from any of the information identified above to create a profile reflecting your preferences, characteristics, psychological trends, predispositions, behavior, attitudes, intelligence, abilities, or aptitudes.
Commercial information: including records of services rendered, purchased, obtained, or considered, or other purchasing or use histories or tendencies.
Job application information: such as your resume or CV, cover letter, work experience or other experience, education, transcripts, demographic information processed during the application or recruitment process such as gender, information about your citizenship and/or nationality, medical or health information, your racial or ethnic origin, or other information you provide to us in connection with or support of an application.
Blockchain information: such as wallet information, account balances, and on-chain transaction information. Although this information may not be considered personal information on its own, we may combine it with personal information collected from our Website and/or services. We collect this information:
Directly from you when you provide it to us.
Automatically as you navigate through the site. Information collected automatically may include usage details, IP addresses, and information collected through cookies, web beacons, and other tracking technologies.
From third parties, such as our service providers and business partners.
Information You Provide to Us: The information we may ask you to provide and that we will collect may include:
Information that you provide by filling in forms on our Website. This includes information provided at the time of registering for an account on our Website.
Records and copies of your correspondence (including email addresses), if you contact us.
Your responses to surveys that we might ask you to complete for research purposes.
Details of transactions you carry out through our Website and of the fulfillment of your orders. You may be required to provide financial information before placing an order through our Website.
Your personal information that you provided for any sweepstakes or contests. In some jurisdictions, we are required to publicly share information of sweepstakes and contest winners.
Your sign-ups for conferences, trade shows, and other events that we host or are associated with.
If you are applying for a work or service provider position, we may collect information about your work experience, your resume, the URL to your personal or professional websites, and your skill sets and certifications.
You also may provide information to be published or displayed (hereinafter, "posted") on public areas of the Website, or transmitted to other users of the Website or third parties (collectively, "User Contributions"). Your User Contributions are posted on and transmitted to others at your own risk. Although we may limit access to certain pages, and you may be able to set certain privacy settings for such information by logging into your account profile, please be aware that no security measures are perfect or impenetrable. Additionally, we cannot control the actions of other users of the Website with whom you may choose to share your User Contributions. Therefore, we cannot and do not guarantee that your User Contributions will not be viewed by unauthorized persons.
Information We Collect Through Automatic Data Collection Technologies: As you navigate through and interact with our Website, we may use automatic data collection technologies to collect certain information about your equipment, browsing actions, and patterns, including:
Details of your visits to our Website, including traffic and usage data, location data, logs, and data about the pages and areas that you access and interact with on the Website.
Information about your computer and internet connection, including your IP address, operating system, and browser type. We also may use these technologies to collect information about your online activities over time and across third-party websites or other online services (behavioral tracking).
The information we collect automatically is aggregated statistical data and does not include personal information, but we may maintain it or associate it with personal information we collect in other ways or receive from third parties. It helps us to improve our Website and to deliver a better and more personalized service, including by enabling us to:
Estimate our audience size and usage patterns.
Store information about your preferences, allowing us to customize our Website according to your individual interests.
Speed up your searches.
Remember your preferences while you use our Website.
The technologies we use for this automatic data collection may include:
Cookies: A cookie is a small file placed on the hard drive of your computer. You may refuse to accept browser cookies by activating the appropriate setting on your browser. However, if you select this setting you may be unable to access certain parts of our Website. Unless you have adjusted your browser setting so that it will refuse cookies, our system will issue cookies when you direct your browser to our Website.
Performance and Analytics Cookies: These cookies allow us to count visits and traffic sources, through various third parties, so we can measure and improve the performance of our Website. They enable us to collect information about how you use our Website or read our publications, for instance which pages are viewed by visitors most frequently and how users interact with our Website. This information is used to compile reports to improve the site, including reports on the number of visitors to the Website, where the visitors are located, marketing and referrals, and what pages the users visit on the Website. All information these cookies collect is aggregated and is not associated with an identifiable individual. If you do not allow these cookies, we will not know when you have visited our Website and will not be able to monitor site performance.
Web Beacons: Pages of the Website and our e-mails may contain small electronic files known as web beacons (also referred to as clear gifs, pixel tags, and single-pixel gifs) that permit us, for example, to count users who have visited those pages or opened an email, and to gather other related website statistics (for example, recording the popularity of certain website content and verifying system and server integrity).
Pixel Tags: Pixel tags are tiny graphic images with a unique identifier, similar in function to cookies, that are used to track online movements of web users. In contrast to cookies, which are stored on a computer’s hard drive or web browser, pixel tags are embedded invisibly in web pages. Pixel tags are often used in combination with cookies, to trigger the placing of cookies, and/or to transmit information to us or our vendors or partners. This enables two websites to share information. The resulting connection can include information such as a device’s IP address, the time a person viewed the pixel, an identifier associated with a browser or device, the type of browser being used, and the URL of the web page from which the pixel was viewed. Pixel tags also allow us to send email messages in a format that users can read, tell us whether emails have been opened, and help to ensure we are sending only messages that may be of interest to our consumers.
Most browsers automatically accept cookies, but you may be able to control the way in which your devices permit the use of cookies, web beacons/clear gifs, and other automatic data collection technologies. If you so choose, you may block or delete our cookies from your browser; however, blocking or deleting cookies may cause some Website features and general functionality to work incorrectly. To opt out of tracking by Google Analytics, click here. Your browser or device settings may allow you to transmit a "Do Not Track" or a "Global Privacy Control" signal when you visit our Website. At this time we do not respond to such signals.
How We Use Your Information
We use personal information that we collect about you or that you provide to us, including any personal information:
To provide our Website and its contents to you.
To provide you with information, products, or services that you request from us.
To fulfill any other purpose for which you provide it.
To provide you with notices about your account, including information about purchases.
To carry out our obligations and enforce our rights arising from any contracts entered into between you and us, including for billing and collection.
To notify you about changes to our Website or any products or services we offer or provide through it.
To process job applications, conduct background checks, and assess your skills, qualifications, and interest in our career opportunities.
For any other purpose we may disclose when you provide the personal information.
To analyze users' blockchain activity and transaction information to improve our services and Website.
For any other purpose with your consent. We may use the information we have collected from you to enable us to display advertisements to our advertisers' target audiences. Even though we do not disclose your personal information for these purposes without your consent, if you click on or otherwise interact with an advertisement, the advertiser may assume that you meet its target criteria.
Disclosure of Your Information
We may disclose aggregated information about our users, and information that does not identify any individual, without restriction. We may disclose personal information that we collect or you provide as described in this policy:
To our subsidiaries and affiliates.
To contractors, service providers, and other third parties we use to support our business and who are bound by contractual obligations to keep personal information confidential and use it only for the purposes for which we disclose it to them.
To a buyer or other successor in the event of a merger, divestiture, restructuring, reorganization, dissolution, or other sale or transfer of some or all of our assets, whether as a going concern or as part of bankruptcy, liquidation, or similar proceeding, in which personal information held by us about our Website users is among the assets transferred.
To fulfill the purpose for which you provide it. For example, if you give us an email address to sign up for our newsletter, we may transmit your email address to our third-party service provider that sends the newsletter on our behalf.
For any other purpose disclosed by us when you provide the personal information.
With your consent. We may also disclose your personal information:
To comply with any court order, law, or legal process, including responding to any government or regulatory request.
If we believe disclosure is necessary or appropriate to protect the rights, property, or safety of ORA, our customers, or others. This includes exchanging information with other companies and organizations for the purposes of conducting background checks, complying with Know Your Customer and anti-money laundering laws, fraud protection, and credit risk reduction.
Data Security
We have implemented measures designed to secure your personal information from accidental loss and from unauthorized access, use, alteration, and disclosure. Unfortunately, the transmission of personal information via the internet is not completely secure. Although we do our best to protect your personal information, we cannot guarantee the security of your personal information transmitted to our Website. Any transmission of personal information is at your own risk.
Children Under the Age of 18
Your Privacy Rights
California’s Shine the Light
Under California’s “Shine the Light” law (Cal. Civ. Code § 1798.83), California residents who provide us certain personal information are entitled to request and obtain from us, free of charge, information about the personal information (if any) we have shared with third parties during the immediately preceding calendar year for their own direct marketing use. Such requests may be made once per calendar year for information about any relevant third-party sharing in the prior calendar year. California residents who would like to make such a request may submit a request via the contact information provided below. The request should attest to the fact that the requester is a California resident and provide a current California address. We are only required to respond to a customer request once during any calendar year. Please be aware that not all information sharing is covered by California’s “Shine the Light” law and only information sharing that is covered will be included in our response.
Nevada
If you are a resident of Nevada, you have the right to opt out of the sale of certain personal information to third parties who intend to license or sell that personal information. You can exercise this right by contacting us here. Please note that we do not currently sell your personal information as sales are defined in Nevada Revised Statutes Chapter 603A. If you have any questions, please contact us.
International Transfer of Personal Information
All personal information processed by us may be transferred, processed, and stored anywhere in the world, including, but not limited to, the United States or other countries, which may have data protection laws that are different from the laws where you live. We endeavor to safeguard your personal information consistent with the requirements of applicable laws.
If we transfer personal information which originates in the European Economic Area, Switzerland, and/or the United Kingdom to a country that has not been found to provide an adequate level of protection under applicable data protection laws, one of the safeguards we may use to support such transfer is the EU Standard Contractual Clauses.
For more information about the safeguards we use for international transfers of your personal information, please contact us as set forth below.
Contact Information
To ask questions or comment about this privacy policy and our privacy practices, contact us at:
Input: Prompt. Output: Inference result.
Input: Instruction and prompt formatted as a JSON string: {"instruction":"${instruction}","input":"${prompt}"}. The default instruction (if sending a raw prompt only) is "You are a helpful assistant".
Output: Inference result.
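The JSON-string input format above can be assembled programmatically. Here is a minimal sketch in Python; the helper name `build_prompt` is hypothetical and not part of any ORA SDK:

```python
import json

DEFAULT_INSTRUCTION = "You are a helpful assistant"

def build_prompt(prompt: str, instruction: str = DEFAULT_INSTRUCTION) -> str:
    # Serialize in the documented shape: {"instruction":"...","input":"..."}
    # (compact separators so no extra whitespace appears in the string)
    return json.dumps({"instruction": instruction, "input": prompt},
                      separators=(",", ":"))

s = build_prompt("What is opML?")
print(s)  # {"instruction":"You are a helpful assistant","input":"What is opML?"}
```

Sending only a raw prompt is equivalent to using the default instruction shown above.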
Input: Prompt. Output: IPFS hash of the inference result, accessible via an IPFS gateway.
1) Go to the ORA Staking Page.
2) Choose the pool you staked, or start from the dashboard page and click on the arrow to open the staking page.
5) See your points in the dashboard.
Here is the audit report of ORA Staking: .
Reached when the zk proof is verified or the dispute period has passed; the data then becomes fully immutable and constant. See .
More information can be found on page.
is a machine learning proving framework. It uses game theory to ensure the validity of AI inference results; the proving mechanism works similarly to the optimistic-rollup approach.
is a machine learning proving framework. It combines cryptography and game theory to ensure the validity of AI inference results.
Information about running Tora Client can be found under our .
The incentive is in the form of ORA points. Read more on .
Each validated transaction will earn 3 points. Read more here:
Depending on your use case, you can choose from a variety of supported AI models. Please refer to the page, where you'll find all the essential details regarding each model, including supported blockchain networks, associated fees, and other relevant information.
ML inferences can be deterministic provided that the random seed is fixed and the inference is run using or in our deterministic VM. Learn more from .
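To illustrate the seed-fixing point, here is a generic sketch (not ORA's deterministic VM): fixing the RNG seed makes a stochastic sampling step fully reproducible.

```python
import random

def sample_token(weights, seed):
    # Pick a token id according to (unnormalized) probability weights.
    # A fixed seed makes the stochastic choice deterministic.
    rng = random.Random(seed)
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]

a = sample_token([0.1, 0.7, 0.2], seed=42)
b = sample_token([0.1, 0.7, 0.2], seed=42)
assert a == b  # same seed -> same sampled token, hence a verifiable result
```

The same principle underlies verifiable inference: if every source of randomness is pinned, independent nodes can reproduce and check each other's results.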
Privacy, because all models in opML need to be public and open-source so that network participants can challenge them. This can be mitigated with .
zkLLM and Ligetron data from EthBelgrade conference .
Modulus Labs zkML bringing GPT2-1B onchain resulted in a .
The zkML framework with EZKL .
, zkML has more than a 1000× overhead compared to pure computation.
According to , the average proving time of RISC Zero is 173 seconds for Random Forest Classification.
, which compares our zkML framework to other leading solutions in the space.
EthBelgrade conference , which mentions zkLLM and Ligetron data.
OLM is the first AI model launched through the IMO framework. More details here
More questions? Reach out in our .
Fees and costs may be awarded as provided pursuant to applicable law. If the arbitrator finds that either the substance of your claim or the relief sought in the Demand is frivolous or brought for an improper purpose (as measured by the standards set forth in Federal Rule of Civil Procedure 11(b)), then the payment of all fees will be governed by the JAMS rules. In that case, you agree to reimburse us for all monies previously disbursed by us that are otherwise your obligation to pay under the applicable rules. If you prevail in the arbitration and are awarded an amount that is less than the last written settlement amount offered by us before the arbitrator was appointed, we will pay you the amount we offered in settlement. The arbitrator may make rulings and resolve disputes as to the payment and reimbursement of fees or expenses at any time during the proceeding and upon request from either party made within fourteen (14) days of the arbitrator's ruling on the merits.
Opt-Out
You have the right to opt out and not be bound by the arbitration provisions set forth in these Terms by sending written notice of your decision to opt out to . The notice must be sent to us within thirty (30) days of your first registering to use the Services or agreeing to these Terms; otherwise you shall be bound to arbitrate disputes on a non-class basis in accordance with these Terms. If you opt out of only the arbitration provisions, and not also the class action waiver, the class action waiver still applies. You may not opt out of only the class action waiver and not also the arbitration provisions. If you opt out of these arbitration provisions, we also will not be bound by them.
Contact Information If you have any questions about these Terms, please contact us at: .
To enforce or apply our and other agreements, including for billing and collection purposes.
Our Website is not intended for children under 18 years of age. No one under age 18 may provide any personal information to or on the Website. We do not knowingly collect personal information from children under 18. If you are under 18, do not use or provide any information on this Website, register for the newsletter, make any purchases through the Website, or provide any information about yourself to us, including your name, address, telephone number, email address, or any screen name or username you may use. If we learn we have collected or received personal information from a child under 18, we will delete that information. If you believe we might have any information from or about a child under 18, please contact us at .
Email:
AI Oracle Proxy
Prompt
SimplePrompt
opML
A protocol for optimistic machine learning onchain.
ORA Network
A network secured by ORA nodes.
AI Oracle
The contract that initiates an inference request and returns the result from nodes in the ORA network.
ORA Nodes
Machines that run the Tora client and interact with the ORA network. Nodes currently perform two functions: submitting and validating inference results.
Tora client
The software client that allows users to run an ORA node.
Initial Model Offering (IMO)
The concept of tokenizing AI models
Pool | Staking Cap | Points Rate | Lock Period
ETH & LST (ETH, stETH, STONE) | 10,000 in ETH & LST | 8n Points per n ETH / LST Staked | None
OLM Flexible (OLM) | 300,000,000 in OLM | 24n Points per 10,000n OLM Staked | None
OLM Locked (OLM) | 300,000,000 in OLM | 48n Points per 10,000n OLM Staked | 6 Months
Stablecoin (USDT & USDC) | 1,000,000 in USDT & USDC | 3n Points per 3,000n USDT / USDC Staked | None