Tutorials

In this section, we provide a list of educational tutorials that will help you get started with Onchain AI Oracle (OAO).

  • Interaction with OAO - covers step-by-step creation of a simple Prompt contract that interacts with OAO.

  • Integration into OAO - covers the process of integrating your own AI model into OAO.

About ORA

Verifiable Oracle Protocol

ORA's main product, Onchain AI Oracle (OAO), brings AI onchain.

ORA breaks down the limitations of smart contracts by offering AI inference, so developers can innovate freely.

ORA’s work has been trusted by Compound, Ethereum Foundation, Uniswap, Optimism, Arbitrum, and beyond.

Why ORA?

  • We build, they follow: We are the last oracle for developers. We are the only oracle that has shipped an AI Oracle which is practical to use on Ethereum right now.

  • Onchain AI engine: Our AI Oracle currently supports LlaMA 2 (7B) and Stable Diffusion. You can use and integrate them onchain directly. In the future, we will support any ML model.

  • Fast deployment: We are coming to networks near you, including any network and any L2. You can build your own AI oracle that is programmable, permissionless, and censorship-resistant.

ai/fi

ai/fi = AI + DeFi

The strategy to transform DeFi into ai/fi is simple: "identify the computationally intensive part of any DeFi protocol and replace it with AI."

How ORA Works

ORA provides developers with the tools necessary to build end-to-end trustless and decentralized applications empowered by Artificial Intelligence.

ORA's two main offerings are:

Architecture Overview

Initial Model Offering (IMO) incentivizes long-term and sustainable open-source contribution by tokenizing ownership of open-source AI models. It enables funding for AI development and rewards the community and open-source contributors. Token holders receive a portion of the revenue generated by onchain usage of the model.

Onchain AI Oracle (OAO) brings verifiable AI inference onchain. It empowers decentralized applications by providing trustless and permissionless use of AI, opening up a variety of new use cases that weren't possible before. Use of Onchain AI Oracle implies a fee, which is later distributed to IMO token holders.

Together, IMO and OAO push the boundaries of open-source development.

Benefits for Developers

Developers can utilize Onchain AI Oracle (OAO) to supercharge their smart contracts with AI. Key features of OAO include:

  • All-in-one infrastructure with AI capability and automation

  • Higher performance and shorter finality

  • Run arbitrary program and any large ML model

Advantages of ORA Oracle Network

ORA Oracle Network consists of node operators who run AI Oracle nodes to execute and secure computations with verifiable proofs. Some key advantages of the oracle network include:

  • Unstoppable autonomous protocol and network

  • Optimal cryptography-native decentralization

  • Verifiable, decentralized, and secure network

  • Safeguarding the security of the base layer

  • Efficient allocation of computing power

  • 1-of-N trust model

Introduction

ORA is Ethereum's Trustless AI: the verifiable oracle protocol that brings AI and complex compute onchain.

If you want to build with onchain AI, ORA offers:

  • zkML of keras2circom (the most battle-tested and performant zkML framework)

  • opML of AI Oracle (run huge models like LlaMA2-7B and Stable Diffusion now)

  • zk+opML with opp/ai (futuristic onchain AI that fuses zkML's privacy and opML's scalability)

Onchain AI Oracle (OAO) is ORA's verifiable and decentralized AI oracle. OAO integrates different AI models onchain in ORA AI Oracle nodes. Smart contract developers can build their own contracts on top of different models in AI Oracle and interact with the OAO contract, so that they can use AI onchain.

ai/fi is the fusion of AI (verifiable AI inference provided by OAO) and DeFi. Read more in the ai/fi idea posts.

Some example use cases: AIGC NFT with ERC-7007, zkKYC using facial recognition based on ML, onchain AI games (e.g. Dungeons and Dragons), prediction markets with ML, content authenticity (deepfake verifiers), compliant programmable privacy, prompt marketplaces, reputation/credit scoring. For example integrations and ideas to build, see awesome-ora.

System overview

opp/ai

opp/ai (Optimistic Privacy-Preserving AI), invented by ORA, represents an endgame onchain AI framework and an innovative approach to addressing the challenges of privacy and computational efficiency in blockchain-based machine learning systems. Opp/ai integrates Zero-Knowledge Machine Learning (zkML) for privacy with Optimistic Machine Learning (opML) for efficiency, creating a hybrid model tailored for onchain AI.

Opp/ai, as the latest fusion of zkML and opML, can include any zkML approach. It means that advances in zkML will be directly reflected in opp/ai.

Onchain Fine-tuned Model with Privacy

Opp/ai can be utilized to conceal the fine-tuning weights of models where the majority of the weights are already publicly available. This is particularly relevant for open-source models that have been fine-tuned for specialized tasks. For instance, the LoRA weights in the attention layers of the Stable Diffusion model can be protected using the opp/ai framework.

This capability is crucial for preserving the proprietary enhancements made to publicly shared models, ensuring that while the base model remains accessible, the unique adaptations that provide competitive advantage remain confidential.

Use-cases

Individual voice tuning in text-to-voice models: Text-to-voice service providers may offer personalized voice models that are tailored to the individual's voice characteristics. These personalized models are sensitive and contain valuable data. The opp/ai framework can ensure that the personalized voice model's parameters remain confidential while still offering the service to end-users verifiably.

Financial sector: Trading algorithms are developed to predict market movements and execute trades automatically. These algorithms are highly valuable and contain sensitive strategies that firms wish to protect. A financial institution could use the opp/ai framework to conceal the weights of a model that has been specifically tuned to its trading strategy.

Gaming industry: AI models are used to create challenging and engaging non-player characters (NPCs). Game developers may fine-tune these models to create unique behaviors or strategies that are specific to their game. By using the opp/ai framework, developers can hide the fine-tuned weights that contribute to the NPCs' competitive edge, preventing other developers from copying these features while still providing an immersive gaming experience.

Further readings:

  • Research paper on opp/ai.

Proving Frameworks (zkML, opML, opp/ai)

To establish a verifiable and decentralized oracle network, it's critical to ensure the computation validity of results on the blockchain. This process involves a proof system that ensures the computation is reliable and truthful. By doing so, we can enhance the integrity and trustworthiness of decentralized applications that rely on any-size compute, including AI inference.

Several technologies invented and developed by ORA facilitate verifiable computation, including AI inference, on the blockchain. These innovations include Optimistic Machine Learning (opML), Keras2Circom (Zero-Knowledge Machine Learning, zkML), and Optimistic Privacy-Preserving AI (opp/ai), each representing a significant stride towards integrating verifiable proofs into the blockchain.

Introduction

https://www.ora.io/app/tasks

Overview

ORA Points Program is focused on identifying contributors who engage with the ORA ecosystem. 10% of the ORA token supply will be distributed through the ORA Points Program.

Points act as an indicator that helps us identify the most committed contributors for their early support of ORA's ecosystem as it evolves.

See your points at Points Dashboard.

Tasks

You can earn points in the following ways:

  1. Interact with OAO on networks including Ethereum/Optimism mainnet through ORA's AI Page

  2. Stake in ORA's Staking

  3. Participate in ORA's IMO

  4. ...more ways coming

Technology

Enable IMO with Onchain AI Model and Revenue Sharing Token

IMO requires two core components:

  • Onchain AI Model with Verifiability

  • Revenue Sharing of Onchain Usage

a) Onchain AI Model

IMO needs a way to run AI models fully onchain and verifiably.

We invented opML and opp/ai, currently the only two solutions that bring any AI model onchain.

opML is at the core of OAO (Onchain AI Oracle), which is essential for bringing AI models to IMO.

b) Revenue Sharing

Holders of IMO tokens will receive the benefits of revenue streams including but not limited to:

  • Revenue of model usage (Model Ownership, ERC-7641 Intrinsic RevShare Token): Each use of the AI model onchain will incur a fee, which will be distributed to IMO token holders.

  • Revenue of AI-generated content (Inference Asset, eg. ERC-7007 Zero-Knowledge AI-Generated Content Token): Each use of the AI model generates a specific output (e.g. Stable Diffusion for an image NFT, Sora for a video NFT), which may carry a royalty fee and a mint fee that can be distributed to IMO token holders.

We standardized ERC-7641 Intrinsic RevShare Token to achieve revenue sharing of IMO model token.

User Guide

Usage Guide for Anyone to Use Onchain AI Directly as User

As a user, to interact with AI Oracle, you can:

  • Use AI.ORA.IO frontend.

  • Interact with Prompt contract directly on Etherscan

1. Use ORA's Frontend Interface

We built an interface for users to interact with Onchain AI Oracle directly.

  1. Go to AI.ORA.IO

  2. Enter your prompt

  3. Send transaction

  4. See AI inference result

Check out the video tutorial if you have any questions.

2. Interact on Etherscan

Here's the guide to using AI Oracle by interacting with the Prompt contract on Etherscan:

  1. In Prompt contract's Read Contract section, call estimateFee with the specified modelId.

  2. In Prompt contract's Write Contract section, call calculateAIResult with fee (converted from wei to ether), prompt, and modelId.

  3. In AIOracle contract, watch for a new transaction that fulfills the request you sent.

  4. In Prompt contract's Read Contract section, call getAIResult with the previous modelId and prompt to see the AI inference result.
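The four steps above call read and write functions on the Prompt contract. As a sketch of what those functions look like, here is a hypothetical Solidity interface; the parameter names and exact signatures are assumptions, so check the verified contract source on Etherscan before relying on them:

```solidity
// Hypothetical interface for the Prompt contract described above.
// Signatures are illustrative; verify them on Etherscan.
interface IPrompt {
    // Read: fee in wei required for a request to the given model
    function estimateFee(uint256 modelId) external view returns (uint256);

    // Write (payable): submits the prompt to AI Oracle for the given model
    function calculateAIResult(uint256 modelId, string calldata prompt) external payable;

    // Read: the AI inference result once the request is fulfilled
    function getAIResult(uint256 modelId, string calldata prompt) external view returns (string memory);
}
```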

Comparison of Proving Frameworks

opML vs zkML

ORA leverages opML for Onchain AI Oracle because it’s the most feasible solution on the market for running any-size AI model onchain. The comparison between opML and zkML can be viewed from the following perspectives:

  • Proof system: opML uses fraud proofs, while zkML uses zk proofs.

  • Performance: opML is much more performant, while zkML has long proof generation time and extremely high memory consumption (ref1, ref2, ref3, ref4, ref5, ref6).

  • Security: opML uses crypto-economic based security, while zkML uses cryptography based security.

  • Finality: We can define the finalized point of zkML and opML as follows:

    • zkML: Zero-knowledge proof of ML inference is generated (and verified).

    • opML: Challenge period of ML inference has passed. With additional mechanisms, finality can be achieved much sooner than the full challenge period.

opp/ai

opp/ai combines the opML and zkML approaches to achieve both scalability and privacy. It preserves privacy while being more efficient than zkML.

Compared to pure zkML, opp/ai has much better performance with the same privacy feature.

Develop Guide

Pre-requisites

Required

  • Basic knowledge of Ethereum smart contract development (tutorials)

Optional

  • AI development experience

Resources

  • Source code of OAO: https://github.com/ora-io/OAO

  • For supported models in OAO and deployment addresses, see the Reference page.

  • For example integrations and ideas to build, see awesome-ora.

  • Check out the video tutorial on interacting and building with OAO.

  • To build with AI models of OAO, we provide an example of integration with the LlaMA2 model: Prompt.

Workflow

  1. The user contract sends the AI request to OAO on chain, by calling requestCallback function on the OAO contract.

  2. Each AI request will initiate an opML request.

  3. OAO will emit a requestCallback event which will be collected by opML node.

  4. opML node will run the AI inference, and then upload the result on chain, waiting for the challenge period.

    1. During the challenge period, the opML validators will check the result, and challenge it if the submitted result is incorrect.

    2. If the submitted result is successfully challenged by one of the validators, the submitted result will be updated on chain.

    3. After the challenge period, the submitted result on chain is finalized.

  5. When the result is uploaded or updated on chain, the provided result in opML will be dispatched to the user's smart contract via its specific callback function.

Integration

Overview

To integrate with OAO, you will need to write your own contract.

Smart Contract Integration

  1. Inherit AIOracleCallbackReceiver in your contract and bind it to a specific OAO address:

constructor(IAIOracle _aiOracle) AIOracleCallbackReceiver(_aiOracle) {}

  2. Write your callback function to handle the AI result from OAO. Note that only OAO can call this function:

function aiOracleCallback(uint256 requestId, bytes calldata output, bytes calldata callbackData) external override onlyAIOracleCallback()

  3. When you want to initiate an AI inference request, call OAO as follows:

aiOracle.requestCallback(modelId, input, address(this), gas_limit, callbackData);
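Putting the three steps together, here is a minimal sketch of an integrating contract. The import paths, the payable fee forwarding, and the `results` storage layout are assumptions based on this guide and the fee section, not a verbatim copy of ORA's contracts; check https://github.com/ora-io/OAO for the exact interfaces:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Import paths are illustrative; see the OAO repository for the real ones.
import "./interfaces/IAIOracle.sol";
import "./AIOracleCallbackReceiver.sol";

contract MyPrompt is AIOracleCallbackReceiver {
    // Stores the latest output per request id (illustrative layout).
    mapping(uint256 => bytes) public results;

    // Step 1: bind the contract to a specific OAO deployment.
    constructor(IAIOracle _aiOracle) AIOracleCallbackReceiver(_aiOracle) {}

    // Step 3: initiate an AI inference request, forwarding the fee to OAO.
    function ask(uint256 modelId, bytes calldata input, uint64 gasLimit) external payable {
        aiOracle.requestCallback{value: msg.value}(modelId, input, address(this), gasLimit, "");
    }

    // Step 2: OAO delivers the result here; only OAO may call this.
    function aiOracleCallback(uint256 requestId, bytes calldata output, bytes calldata callbackData) external override onlyAIOracleCallback() {
        results[requestId] = output;
    }
}
```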

Application Integration with OAO Fee

Usage of OAO requires a fee for each request.

You need to obtain the current fee by calling estimateFee in OAO (or your integrated contract) with the specific model id, then proceed to send the request.

The flow of getting and setting the fee is:

  1. Call to read estimateFee (on OAO or your contract if implemented) for inference with a certain model

  2. Get the estimated fee in wei (eg. 21300788080000000)

  3. Call to write and request AI inference on your contract, filling in the estimated fee in ether (eg. 0.02130078808)
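The fee flow can also be handled entirely in Solidity. This sketch assumes `estimateFee` is exposed by the bound `aiOracle` and that `requestCallback` accepts the fee as msg.value; both are assumptions to verify against the deployed contracts:

```solidity
// Illustrative: estimate the fee, then forward it with the request.
// Belongs inside a contract inheriting AIOracleCallbackReceiver.
function requestWithFee(uint256 modelId, bytes calldata input, uint64 gasLimit) external payable {
    uint256 fee = aiOracle.estimateFee(modelId);
    require(msg.value >= fee, "insufficient fee");
    aiOracle.requestCallback{value: fee}(modelId, input, address(this), gasLimit, "");
}
```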

Tasks

Here is an updated list of active tasks you can do to earn points.

Task 1: Use ORA Onchain AI Oracle

3 - 6 Points Per Usage, Repeatable Task

ORA OAO enables verifiable, decentralized AI inference onchain. You can call AI directly onchain through OAO.

To complete:

  1. Go to the ORA OAO Page

  2. Select a model to use

  3. Sign the transaction calling the AI request. Use OAO on mainnets (eg. Ethereum mainnet, Optimism mainnet…). Follow this video tutorial if you have questions.

  4. Once AI results are returned on the same page, you can see your score in the ORA Points Dashboard.

You will earn more points if you use a model tokenized with IMO (right now, there's one available: OLM).

Task 2: Referral

10% of Referee’s Points (not including Twitter Reward and Referee Reward) as Referrer, 5 Points as Referee, Repeatable for Referring others

The ORA Points Program comes with a Referral System. There are 2 ways to partake: i) Join as a referee through other people’s referral code or ii) become a referrer and share ORA Points Program with others.

Here are the steps for confirming your referee status:

  1. Go to the ORA Points Program Dashboard

  2. Fill in your referral code in the “Enter Code” section

  3. You will receive 5 Points for this confirmation as referee

Here are the steps for referring others:

  1. Go to the ORA Points Program Dashboard

  2. Locate your referral code on the Dashboard

  3. Distribute your referral code to others

  4. Once others use your referral code, you will get 10% of the referee’s points continuously

Task 3: Staking

See Staking Page for more information.

More Tasks

In the future, we will enable more tasks for users to learn about and participate in ORA’s ecosystem.

ORA Points Program Tips

Here are some tips to take part in the ORA Points Program effectively:

  • Try and call all models on all chains, at different costs and with different features.

  • Spread the word and share your referral code widely.

  • Encourage your referees to become active participants, since points equivalent to 10% of theirs will be credited to you.

  • We may host special challenges featuring bonus points. Stay up-to-date with our official communications on ORA’s X account.

  • Regularly check the ORA Points Program page for new developments and additional tasks.

  • If you experience any issues, feel free to reach out on ORA’s Discord.


Introduction

Initial Model Offering

In this age of AI, ORA is introducing a new mechanism, IMO (Initial Model Offering).

TL;DR: IMO tokenizes AI model onchain.

  • For AI models, IMO enables sustainable funding for open-source AI models.

  • For ecosystems, IMO helps align values and incentives for distribution and ongoing contribution.

  • For token holders, IMO lets anyone capture the value of AI models onchain, from sources including onchain revenue and inference assets (eg. ERC-7007).

Many open-source AI models face challenges in monetizing their contributions, leading to a lack of motivation for contributors and organizations alike. As a result, the AI industry is currently led by closed-source, for-profit companies. The winning formula for open-source AI models is to gather more funding and build in public.

With IMO, we can win the fight for open-source AI. IMO can enable the sustainability of the open-source AI model’s ecosystem by fostering long-term benefits and encouraging engagement and funding to the open-source AI community. The win is when we have better open-source models than proprietary models.

IMO tokenizes the ownership of open-source AI models, sharing its profit to the token holders.

Learn more:

  • Blog - IMO: Initial Model Offering

  • Announcement - IMO: Initial Model Offering

  • ETHDenver Talk - AI Isn't Evil and Smart Contract Proves It

  • Sample ERC-7641 implementation

Keras2Circom (zkML)

https://github.com/ora-io/keras2circom

Introduction

zkML is a proving framework that leverages zero-knowledge proofs to prove the validity of ML inference results onchain. Due to its private nature, it can protect confidential data and model parameters during training and inference, addressing privacy issues and reducing the blockchain's computational load.

Keras2Circom, built by ORA, is the first advanced zkML framework that is battle-tested. In a recent benchmark by Ethereum Foundation ESP Grant Proposal [FY23-1290] on leading zkML frameworks, Keras2Circom and its underlying circomlib-ml proved more performant than other frameworks.

Ecosystem

Besides being production-ready, circomlib-ml has a rich ecosystem:

  • nova-ml by Ethereum Foundation

  • zator

  • ZKaggle

  • Picus

Ecosystem

Permissionless Technology for AI x Crypto

IMO, introduced by ORA, is permissionless, so anyone and any community can carry out an IMO for their AI model.

IMO tokenizes specific AI models, which gives:

  • The community the ability to efficiently fundraise for open-source.

  • Contributors incentives to continue improving a globally accessible model.

  • Token holders revenue opportunities from the use of the model onchain.

IMO is steering us to a future where AI is sustainable, diverse and open for all.

AI x Crypto Pyramid

Based on the framework of IMO, the ecosystem is evolving towards a more structured and layered approach in AI x Crypto advancement.

The foundation begins with IMO (Initial Model Offering), focusing on the tokenization of foundation models and decentralized networks.

Moving up, the IAO (Initial Agent Offering) introduces fine-tuned models tailored for specific tasks, enhancing the adaptability and precision of AI agents.

At the apex, Inference Assets represent the assets of verifiable decentralized inference, ensuring that the AI DApps operate with integrity and transparency.

Inference Assets

Inference Assets are tokenized representations (usually compatible with ERC-7007: Verifiable AI-Generated Content Token) of AI inference results. For instance, an NFT featuring an AI-generated image from an onchain Stable Diffusion model is considered an inference asset.

These assets, derived from onchain AI, offer additional revenue streams for onchain AI through mechanisms such as royalty fees associated with these inference asset NFTs.

For more information, check out ERC-7007: Verifiable AI-Generated Content Token.

Staking

https://ora.io/app/stake

Introduction

ORA Points Program Staking is designed to ensure the security of the upcoming fully decentralized AI Oracle network.

You can stake your assets, and receive ORA Points as reward by contributing to the decentralization of AI Oracle network secured by opML.

Asset                          Total Cap             Points Per Day
ETH Pool (ETH, stETH, STONE)   10,000 in ETH & LST   8 Points with 1 ETH & LST Staked
OLM Pool (OLM)                 300,000,000 in OLM    24 Points with 10,000 OLM Staked

Supported Assets

Currently, ORA Points Program Staking supports these assets to be staked in:

  • ETH (Native ETH)

  • stETH (Lido)

  • STONE (StakeStone)

  • OLM (OpenLM)

Total Staking Cap

In the first phase, ORA Points Program Staking has a cap on the total staked assets.

  • ETH & LST Pool (ETH, stETH, STONE): 10,000 in ETH & LST

  • OLM Pool (OLM): 300,000,000 in OLM

Staking Guide

Stake

1) Go to ORA Staking Page: https://ora.io/app/stake.

2) Select the token you want to stake, and enter amount.

3) Press "CONFIRM", and sign the transaction in your wallet. You start receiving points once the transaction is confirmed on the blockchain.

4) See your points in dashboard: https://ora.io/app/tasks/dashboard. Please note your points will be distributed every day, instead of every hour.

Withdraw

Withdraw will be enabled 1 month after the genesis of ORA Points Program Staking for effectively securing the AI Oracle network.

1) Go to ORA Staking Page: https://ora.io/app/stake.

2) Switch to "WITHDRAW" tab.

3) Select the staked token you want to withdraw, and enter amount.

4) Press "INITIATE WITHDRAW", and sign the transaction in your wallet. The withdrawal is initiated once the transaction is confirmed on the blockchain.

To ensure the security of ORA Points Program Staking, all funds go through a 1-day waiting period before being eligible to be fully withdrawn.

5) Switch to the second window of "WITHDRAW" tab by clicking the arrow button at the bottom right.

6) Press "COMPLETE WITHDRAW" when the assets are eligible to be fully withdrawn, and sign the transaction in your wallet. The withdrawal completes once the transaction is confirmed on the blockchain.

Staking Points

Staking Points are measured based on the time-integrated amount staked, in units of TOKEN * days.

Here's the formula for the points of one staked asset, with p as the total points awarded, t as the staking time if you stake for n days, c as the constant of staking points for different assets, and m as the amount of the staked asset:

f(p) = \sum_{t=0}^{n} c \cdot m

The constants of staking points (equal to the points you get if you stake 1 unit of an asset for 1 day):

  • ETH & LST Pool (ETH, stETH, STONE): 8. You will receive 8 points per ETH staked every 24 hours.

  • OLM Pool (OLM): 0.00024. You will receive 24 points per 10,000 OLM staked every 24 hours.
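Plugging the per-day constants into the formula, a worked example (the amount and duration here are hypothetical): staking 2 ETH for 3 full days at c = 8 accumulates

```latex
p = \sum_{\text{3 days}} c \cdot m = 3 \times 8 \times 2\,\mathrm{ETH} = 48 \text{ points}
```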

In simpler words:

  • You accumulate points over time when you stake into ORA Points Program Staking.

  • Points are updated and awarded every day.

  • You can stake multiple assets and earn points for all of them.

opML

Introduction

opML (Optimistic Machine Learning), invented and developed by ORA, introduces a groundbreaking approach to integrating machine learning with blockchain technology. By leveraging similar principle of optimistic rollups, opML ensures the validity of computations in a decentralized manner. This framework enhances transparency and fosters trust in machine learning inference by allowing for onchain verification of AI computation.

Architecture

opML is composed of the following key components:

  1. Fraud Proof Virtual Machine (Off-chain VM): A robust off-chain engine responsible for executing machine learning inference and generating new VM states as outputs. When discrepancies occur, manifested as different VM states, the MIPS VM employs a bisection method to pinpoint the exact step, or instruction, where the divergence begins.

  2. opML Smart Contracts (On-chain VM): Utilized for the verification of computational results, ensuring the accuracy of the off-chain computation. These contracts allow the execution of a single MIPS instruction, enabling the on-chain environment to verify specific steps in the computation process. This capability is vital for resolving disputes and ensuring the integrity of the off-chain computation.

  3. Fraud Proofs: In the event of a dispute, fraud proofs generated by the verifier serve as conclusive evidence, illustrating the discrepancy in computation and facilitating the resolution process through the opML smart contracts.

Verification Game

The verification game is the process where two or more parties are assumed to execute the same program. The parties can then challenge each other, pinpointing the disputable step, which is sent to the smart contract for verification.

For the system to work as intended it's important to ensure:

  • Deterministic ML execution

    opML ensures consistent ML execution by using fixed-point arithmetic and software-based floating-points, eliminating randomness and achieving deterministic outcomes with a state transition function.

  • Separate Execution from Proving

    opML utilizes a dual-compilation method: one for optimized native execution and another for fraud-proof VM instructions for secure verification. This ensures both fast execution and reliable, machine-independent proof.

  • Efficiency of AI model inference in VM

    Existing fraud proof systems widely adopted in optimistic rollups need to cross-compile the whole computation into fraud proof VM instructions, resulting in inefficient execution and huge memory consumption. opML proposes a novel multi-phase protocol that allows semi-native execution and lazy loading, greatly speeding up the fraud proof process.

The whole opML process includes the following steps:

  1. The requester first initiates an ML service task.

  2. The server then finishes the ML service task and commits results on chain.

  3. The verifier will validate the results. Suppose there exists a verifier who declares the results are wrong. It starts a verification game (bisection protocol) with the server and tries to disprove the claim by pinpointing one concrete erroneous step.

  4. Finally, arbitration about a single step will be conducted on smart contract.
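The bisection protocol in step 3 can be pictured as a binary search over execution steps. This sketch is purely illustrative and is not ORA's actual dispute contract; all names are hypothetical:

```solidity
// Purely illustrative bisection sketch: narrow the disputed range of
// execution steps until one instruction remains, then arbitrate it on chain.
contract BisectionSketch {
    uint256 public agreedStep;    // last step both parties agree on
    uint256 public disputedStep;  // earliest step where claimed states diverge

    // Each round, the challenger states whether it agrees with the
    // server's claimed VM state at the midpoint of the disputed range.
    function bisect(bool agreeAtMidpoint) external {
        require(disputedStep > agreedStep + 1, "already at a single step");
        uint256 mid = (agreedStep + disputedStep) / 2;
        if (agreeAtMidpoint) {
            agreedStep = mid;       // divergence lies in the upper half
        } else {
            disputedStep = mid;     // divergence lies in the lower half
        }
        // Once disputedStep == agreedStep + 1, the single disputed MIPS
        // instruction is executed on chain to settle the dispute.
    }
}
```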

Multi-phase Verification Game

The multi-phase verification game extends the single-phase verification game, allowing better utilization of computing resources.

The single-phase verification game cross-compiles the whole ML inference code into Fraud Proof VM instructions. This is less efficient than native execution (it doesn't utilize the full potential of GPU/TPU acceleration and parallel processing). The Fraud Proof VM also has limited memory, which prevents loading large models into memory directly.

To address the issues above, multi-phase verification game introduces the following properties:

  • Semi-Native Execution: With the multi-phase design, we only need to conduct the computation in the VM in the final phase, resembling the single-phase protocol. In other phases, we have the flexibility to perform the computations that lead to state transitions in the native environment, leveraging parallel processing on CPU, GPU, or even TPU. By reducing the reliance on the VM, we significantly minimize overhead, resulting in execution performance of opML almost akin to that of the native environment.

  • Lazy Loading Design: To optimize the memory usage and performance of the fraud proof VM, we implement a lazy loading technique. This means that we do not load all the data into the VM memory at once, but only the keys that identify each data item. When the VM needs to access a specific data item, it uses the key to fetch it from the external source and load it into memory. Once the data item is no longer needed, it is swapped out of memory to free up space for other data items. This way, we can handle large amounts of data without exceeding the memory capacity or compromising the efficiency of the VM.

Further readings

  • Detailed explanation of opML can be found in our research paper.

  • Check out ORA's open-source implementation repository.

Multi-phase verification game

FAQ

Frequently Asked Questions

Is ORA a rollup or layer 2?

Neither.

ORA is a verifiable oracle protocol. The ORA network looks like a "layer 2" on a typical blockchain network. However, it doesn't scale smart contracts' computation; it extends smart contracts' features. An actual rollup requires a bridge or an aggregator.

Is ORA "adding new features" to existing oracles, or "rebuilding and replacing"?

Rebuilding and replacing.

We are the first and your last oracle.

Is ORA an oracle?

Yes.

ORA is the verifiable oracle protocol.

Will all of ORA's code be open-sourced?

Yes.

Why is ORA trustless, not trust-minimized/trustful?

Strictly speaking, nothing is trustless, because you still have to trust some fundamental rules like math or cryptography. But we still use the term trustless to demonstrate the trustworthiness, security, verifiability, and decentralization of our network. Trust-minimized is also a good word to describe ORA.

What is the relationship between World Supercomputer and ORA?

Simply put, ORA is a key network of the entire World Supercomputer network. In addition, World Supercomputer is also served by Ethereum as the consensus network and Storage Rollup as the storage network.

What is the difference between World Supercomputer and Rollups?

The main difference between modular blockchain (including L2 Rollup) and world computer architecture lies in their purpose: Modular blockchain is designed for creating a new blockchain by selecting modules (consensus, DA, settlement, and execution) to put together into a modular blockchain; while World Supercomputer is designed to establish a global decentralized computer/network by combining networks (base layer blockchain, storage network, computation network) into a world computer.

How does opML guarantee consistency, given ML models are non-deterministic?

The model has to use deterministic inference (learn more from this talk on determinism in ML), either using the built-in Nvidia deterministic feature, or by moving the model into our deterministic VM (recommended for better support).

How does AI Oracle handle large responses such as ones to generate videos or images?

The generated content will be stored on a decentralized storage network, eg. IPFS.

For IPFS, you can retrieve it through an IPFS gateway with the id given by AI Oracle.

What's the challenge time of opML? 7 days?

Normally, optimistic rollups choose 7 days as their challenge period.

For opML, the challenge period can be much shorter, because opML is not a rollup: it does not involve large volumes of financial operations or maintain a public ledger. When optimized, the challenge period can be like that of a sovereign rollup: a few minutes or even a few seconds.

Are fraud proofs "slower" than zk proofs?

As mentioned above, as long as the challenge period of an optimistic fraud proof is shorter than the zk proof generation time, fraud proofs are faster than zk proofs.

Note that zkML is not feasible for huge AI models, and where it is feasible, zk proof generation in the zkML approach is much slower than the opML approach.

I want to build with ORA's onchain AI, what can I choose?

You have multiple options for building with ORA's onchain AI:

  • zkML of keras2circom (the most battle-tested and performant zkML framework)

  • opML of AI Oracle (run huge model like LlaMA2-7B and Stable Diffusion now)

  • zk+opML with opp/ai (futuristic onchain AI fuses zkML's privacy and opML's scalability)

We recommend building with the AI Oracle on opML, because it is production-ready and works out of the box, with support for LlaMA2-7B and Stable Diffusion.

What is the limitation of opML?

Privacy: all models in opML need to be public and open-source so that network participants can challenge them.

That is why we came up with opp/ai to add privacy feature to opML.

What is the proving overhead, performance, and limitations of other zkML frameworks?

Here are some facts:

  • Modulus Labs used zkML to bring GPT2-1B onchain with over 1,000,000x overhead (200+ hours per call) on a 128-core CPU with 1TB RAM.

  • The zkML framework EZKL takes around 80 minutes to generate a proof for a 1M-parameter nanoGPT model.

  • According to Modulus Labs, zkML has an overhead of well over 1,000x compared to pure computation, with the latest reported figure being 1,000x (and 200x for tiny models).

  • According to EZKL’s benchmark, the average proving time of RISC Zero is 173 seconds for Random Forest Classification.

You can also check out this Ethereum Foundation-granted benchmark comparing our zkML framework with others.

What does the OAO fee consist of?

OAO fee = Model Fee (for LlaMA2 or Stable Diffusion) + Callback Fee (for the node to submit the inference result back onchain) + Network Fee (i.e. the transaction fee of networks like Ethereum)

Callback fee and network fee may be higher when the network is experiencing heavy traffic.

Callback fee may be lower if you are using a model such as Stable Diffusion, because the inference result is shorter (just an IPFS hash, instead of the long text returned by an LLM).
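
A back-of-the-envelope sketch of this fee structure in Python (the numbers are hypothetical, for illustration only; in practice you should call estimateFee on the contract):

```python
def oao_fee(model_fee: int, callback_gas_limit: int, gas_price: int, network_fee: int) -> int:
    """Total OAO fee in wei: model fee + callback fee + network fee."""
    # The callback fee grows with the gas price and the gas needed to store the result.
    callback_fee = callback_gas_limit * gas_price
    return model_fee + callback_fee + network_fee

# Hypothetical numbers, for illustration only:
total = oao_fee(
    model_fee=3 * 10**14,        # 0.0003 ETH model fee
    callback_gas_limit=500_000,  # example callback gas limit
    gas_price=30 * 10**9,        # 30 gwei
    network_fee=10**14,
)
```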

Why is interacting with OAO on Ethereum mainnet expensive?

With the OAO fee structure, the callback fee usually makes up the major portion of the overall fee, because storing the inference result data on Ethereum is expensive.

You can try the Stable Diffusion model with OAO on Ethereum mainnet (the callback fee will be lower), or use OAO on other networks for lower cost.

...still got questions?

Reach out in our Discord.

Glossary

Aggregation

An operation that brings together multiple elements.

AssemblyScript

A TypeScript-like language for WebAssembly.

Automation

A type of middleware that performs operations without human control. Commonly used in blockchain due to smart contracts' inability to trigger automatic functions and DApps' need for periodic calls.

CLE

Computational Entity: the customizable and programmable software defined by developers, and the underlying oracle network that runs it.

Client

Software for operating a node, usually developed by the network's core developer team.

Dispute Period

A time window for finalizing disagreement on computation. Usually takes weeks. Used in traditional oracle networks.

Ethereum PoS

Refers to Ethereum's consensus algorithm.

Filtering

An operation that rejects unwanted elements.

Finality

Reached when a zk proof is verified, or a dispute period has passed, and the data becomes fully immutable and constant. See L2Beat's definition.

Fisherman Mechanism

A staking mechanism that involves a "fisherman" to check the integrity of nodes and raise disputes, and an "arbitrator" to decide dispute outcomes.

GraphQL

An approach for querying data. Commonly used in the front-end of DApps.

Halo2

A high-performance implementation of zk-SNARK by Electric Coin Co.

Hyper Oracle

The former naming format of HyperOracle. Use HyperOracle.

Indexing

A type of middleware that fetches and organizes data. Commonly used in blockchain due to blockchain data's unique storage model.

IMO

Initial Model Offering (IMO) is a mechanism for tokenizing an open-source AI model. Through its revenue sharing model, IMO fosters transparent and long-term contributions to any AI model.

The process works as follows:

  1. IMO launches an ERC-20 token (more specifically, ERC-7641 Intrinsic RevShare Token) of any AI model to capture its long-term value.

  2. Anyone who purchases the token becomes one of the owners of this AI model.

  3. Token holders share revenue generated from onchain usage of the tokenized AI model.
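
Step 3 can be sketched as a simple pro-rata split (an illustrative model only, not the ERC-7641 implementation):

```python
def distribute_revenue(revenue: int, balances: dict[str, int]) -> dict[str, int]:
    """Pro-rata revenue share: each holder gets revenue * balance / total supply."""
    supply = sum(balances.values())
    # Integer floor division mirrors how amounts are computed in wei onchain.
    return {holder: revenue * bal // supply for holder, bal in balances.items()}

# A holder with 60% of the supply receives 60% of the period's revenue:
payouts = distribute_revenue(10**18, {"alice": 600, "bob": 400})
assert payouts["alice"] == 6 * 10**17 and payouts["bob"] == 4 * 10**17
```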

JavaScript

A programming language for the Web.

Latency

A delay, after the execution result is output, caused by proof generation or a dispute period.

Mapping

An operation that associates input elements with output elements.

Middleware

Services or infrastructure needed in the development pipeline.

Node

A computer running a client.

OAO

Onchain AI Oracle (OAO) is an oracle system which provides verifiable AI inference to smart contracts.

The system consists of 2 parts:

  • Set of smart contracts - Any dapp can request AI inference by calling OAO smart contracts. Oracle nodes listen to the events emitted by the OAO smart contracts and execute AI inference. Upon successful execution, the results are returned in the callback function.

  • Network of nodes - Nodes execute AI inference and return results back to the blockchain. Validity of the result is proven through one of the proving frameworks: zkML, opML, or opp/ai.

On/Off-chain

On the (specific) blockchain / not on the (specific) blockchain. Usually refers to data or computation.

opML

Optimistic Machine Learning (opML) is a machine learning proving framework. It uses game theory to ensure the validity of AI inference results. The proving mechanism works similarly to the optimistic rollup approach.

opp/ai

Optimistic Privacy Preserving AI (opp/ai) is a machine learning proving framework. It combines cryptography and game theory to ensure the validity of AI inference results.

ORA

Name of our project. Previously HyperOracle.

Oracle

A component for processing data in DApp development. Can be an input oracle (bringing off-chain data on-chain), an output oracle (taking on-chain data off-chain), or an I/O oracle (an output oracle followed by an input oracle).

Programmable

Able to be customized and defined by code.

Proof Generation

A process of producing a zero-knowledge proof. Usually takes much longer than the execution itself.

Recursive Proof

A ZK proof that verifies other ZK proofs, for compressing more knowledge into a single proof.

Slashing

A process that involves the burning of staked tokens of a misbehaving node.

Staking

A required process in which a node newly entering the network deposits and locks tokens.

Subgraph

Code that defines and configures the indexing computation of The Graph's indexers.

Subgraph-Equivalence

Complete alignment with the Subgraph specification and syntax.

Succinct

Able to be verified easily, with fewer computation resources than re-execution. An important property of zero-knowledge proofs.

Trust Model

A model for analyzing trust assumptions and security. See Vitalik's post on trust models.

Trustless

Able to trust without relying on third-party.

TypeScript

A strongly-typed superset of JavaScript.

Validity

Refers to correctness of computation.

Validity Proof

A type of zero-knowledge proof that does not utilize the privacy feature, only the succinctness property.

Verifiable

Able to be checked or proved to be correct.

Verification

A computation that checks whether a proof is correct. If verification passes, the proven statement is correct and finalized.

Verifier

A role that requires a prover to convince it with a proof. Can be a person, a smart contract on another blockchain, a mobile client, or a web client.

WASM

A binary code format and compilation target. Commonly used on the Web.

World Supercomputer

A global P2P network linking three topologically heterogeneous networks (Ethereum, ORA, Storage Rollup) with zero-knowledge proof.

Zero-knowledge Proof

A cryptographic method for proving. Commonly used in blockchain for privacy and scaling solutions; the term is often misused when only the succinctness property is meant.

zk-SNARK

A commonly used zero-knowledge proof construction.

zkAutomation

A trustless automation CLE standard using zk. Also known as ZK Automation.

zkEVM

A zkVM that uses the EVM instruction set as its bytecode.

zkML

Zero-Knowledge Machine Learning (zkML) is a machine learning proving framework. It uses cryptography to ensure the validity of AI inference results.

zkVM

A virtual machine that generates zk proofs of its execution.

Interaction with OAO - Tutorial

This tutorial will help you understand the structure of Onchain AI Oracle (OAO) and guide you through building a simple Prompt contract that interacts with OAO. We will implement the contract step by step. At the end, we will deploy the contract to a blockchain network and interact with it.

If you prefer a video version of the tutorial, check it here.

Final version of the code can be found here.

Learning Objectives

  • Setup development environment

  • Understand the project setup and template repository structure

  • Learn how to interact with the OAO and build an AI powered smart contract

Prerequisites

To follow this tutorial you need to have Foundry and git installed.

Setup

  1. Clone the template repository and install its submodules

git clone -b OAO_interaction_tutorial git@github.com:ora-io/Interaction_With_OAO_Template.git --recursive

  2. Move into the cloned repository

cd Interaction_With_OAO_Template

  3. Copy .env.example and rename it to .env. We will need these env variables later for deployment and testing. You can leave them empty for now.

cp .env.example .env

  4. Install foundry dependencies

forge install

Creating Prompt contract

1. Import dependencies

At the beginning we need to import several dependencies which our smart contract will use.

import "OAO/contracts/interfaces/IAIOracle.sol";
import "OAO/contracts/AIOracleCallbackReceiver.sol";

  • IAIOracle - the interface of the AIOracle contract, defining the requestCallback method that the Prompt contract will call

  • AIOracleCallbackReceiver - an abstract contract that holds an instance of AIOracle and declares the callback method that needs to be overridden in the Prompt contract

2. Write the code

We'll start by implementing the constructor, which accepts the address of the deployed AIOracle contract.

constructor(IAIOracle _aiOracle) AIOracleCallbackReceiver(_aiOracle){}

Now let’s define a method that will interact with the OAO. This method takes two parameters: the model id and the input prompt. It also needs to be payable, because the caller must pay the fee for the callback execution.

function calculateAIResult(uint256 modelId, string calldata prompt) payable external {
    bytes memory input = bytes(prompt);
    bytes memory callbackData = bytes("");
    address callbackAddress = address(this);
    uint256 requestId = aiOracle.requestCallback{value: msg.value}(
        modelId, input, callbackAddress, callbackGasLimit[modelId], callbackData
    );
}

In the code above we do the following:

  1. Convert input to bytes

  2. Call the requestCallback function with the following parameters:

    • modelId: ID of the AI model in use.

    • input: User-provided prompt.

    • callbackAddress: The address of the contract that will receive OAO's callback.

    • callbackGasLimit[modelId]: The maximum amount of gas that can be spent on the callback (we will define this mapping next).

    • callbackData: Callback data that is used in the callback.

The next step is to define the mapping that keeps track of the callback gas limit for each model and to set the initial values inside the constructor. We’ll also define a modifier so that only the contract owner can change these values.

address owner;

modifier onlyOwner() {
    require(msg.sender == owner, "Only owner");
    _;
}

mapping(uint256 => uint64) public callbackGasLimit;

function setCallbackGasLimit(uint256 modelId, uint64 gasLimit) external onlyOwner {
    callbackGasLimit[modelId] = gasLimit;
}

constructor(IAIOracle _aiOracle) AIOracleCallbackReceiver(_aiOracle) {
    owner = msg.sender;
    callbackGasLimit[50] = 500_000; // Stable-Diffusion
    callbackGasLimit[11] = 5_000_000; // Llama
}

We want to store every request, so we create a data structure for the request data and a mapping from requestId to the request data.

event promptRequest(
    uint256 requestId,
    address sender, 
    uint256 modelId,
    string prompt
);

struct AIOracleRequest {
    address sender;
    uint256 modelId;
    bytes input;
    bytes output;
}

mapping(uint256 => AIOracleRequest) public requests;

function calculateAIResult(uint256 modelId, string calldata prompt) payable external {
    bytes memory input = bytes(prompt);
    bytes memory callbackData = bytes("");
    address callbackAddress = address(this);
    uint256 requestId = aiOracle.requestCallback{value: msg.value}(
        modelId, input, callbackAddress, callbackGasLimit[modelId], callbackData
    );
    AIOracleRequest storage request = requests[requestId];
    request.input = input;
    request.sender = msg.sender;
    request.modelId = modelId;
    emit promptRequest(requestId, msg.sender, modelId, prompt);
}

In the code snippet above we added prompt, sender and the modelId to the request and also emitted an event.

Now that we have implemented a method for interacting with the OAO, let's define the callback that OAO will invoke after computing the result.

event promptsUpdated(
    uint256 requestId,
    uint256 modelId,
    string input,
    string output,
    bytes callbackData
);

mapping(uint256 => mapping(string => string)) public prompts;

function getAIResult(uint256 modelId, string calldata prompt) external view returns (string memory) {
    return prompts[modelId][prompt];
}

function aiOracleCallback(uint256 requestId, bytes calldata output, bytes calldata callbackData) external override onlyAIOracleCallback() {
    AIOracleRequest storage request = requests[requestId];
    require(request.sender != address(0), "request does not exist");
    request.output = output;
    prompts[request.modelId][string(request.input)] = string(output);
    emit promptsUpdated(requestId, request.modelId, string(request.input), string(output), callbackData);
}

We've overridden the callback function from AIOracleCallbackReceiver.sol. It's important to use the onlyAIOracleCallback modifier, so that only OAO can call back into our contract.

Function flow:

  1. First we check whether a request with the provided id exists. If it does, we store the output value on the request.

  2. Then we update the prompts mapping, which stores the prompt-to-output pairs for each model we use.

  3. Finally, we emit an event signaling that the prompt has been updated.
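
The same request/callback bookkeeping can be mirrored in a short Python sketch (a toy model of the contract's state, not the Solidity API; in the real flow the requestId comes from the OAO contract):

```python
from dataclasses import dataclass

@dataclass
class Request:
    sender: str
    model_id: int
    input: bytes
    output: bytes = b""

class PromptModel:
    """Toy mirror of the Prompt contract's request/callback flow."""
    def __init__(self) -> None:
        self.requests: dict[int, Request] = {}
        self.prompts: dict[tuple[int, str], str] = {}
        self._next_id = 0  # stand-in: the real id is assigned by the oracle

    def calculate_ai_result(self, sender: str, model_id: int, prompt: str) -> int:
        request_id = self._next_id
        self._next_id += 1
        self.requests[request_id] = Request(sender, model_id, prompt.encode())
        return request_id

    def ai_oracle_callback(self, request_id: int, output: bytes) -> None:
        req = self.requests.get(request_id)
        if req is None:
            raise ValueError("request does not exist")  # mirrors the require()
        req.output = output
        self.prompts[(req.model_id, req.input.decode())] = output.decode()

p = PromptModel()
rid = p.calculate_ai_result("0xabc", 50, "a cat")
p.ai_oracle_callback(rid, b"QmHash")
assert p.prompts[(50, "a cat")] == "QmHash"
```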

Notice that this function takes callbackData as the last parameter. This parameter, passed in the requestCallback call, can be used to execute arbitrary logic during the callback. In our simple example, we leave it empty.

Finally, let's add a method that estimates the fee for the callback call.

function estimateFee(uint256 modelId) public view returns (uint256) {
    return aiOracle.estimateFee(modelId, callbackGasLimit[modelId]);
}

With this, the source code for our contract is finished. The final version should look like this:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import "OAO/contracts/interfaces/IAIOracle.sol";
import "OAO/contracts/AIOracleCallbackReceiver.sol";

contract Prompt is AIOracleCallbackReceiver {
    
    event promptsUpdated(
        uint256 requestId,
        uint256 modelId,
        string input,
        string output,
        bytes callbackData
    );

    event promptRequest(
        uint256 requestId,
        address sender, 
        uint256 modelId,
        string prompt
    );

    struct AIOracleRequest {
        address sender;
        uint256 modelId;
        bytes input;
        bytes output;
    }

    address owner;

    modifier onlyOwner() {
        require(msg.sender == owner, "Only owner");
        _;
    }

    mapping(uint256 => AIOracleRequest) public requests;

    mapping(uint256 => uint64) public callbackGasLimit;

    constructor(IAIOracle _aiOracle) AIOracleCallbackReceiver(_aiOracle) {
        owner = msg.sender;
        callbackGasLimit[50] = 500_000; // Stable-Diffusion
        callbackGasLimit[11] = 5_000_000; // Llama
    }

    function setCallbackGasLimit(uint256 modelId, uint64 gasLimit) external onlyOwner {
        callbackGasLimit[modelId] = gasLimit;
    }

    mapping(uint256 => mapping(string => string)) public prompts;

    function getAIResult(uint256 modelId, string calldata prompt) external view returns (string memory) {
        return prompts[modelId][prompt];
    }

    function aiOracleCallback(uint256 requestId, bytes calldata output, bytes calldata callbackData) external override onlyAIOracleCallback() {
        AIOracleRequest storage request = requests[requestId];
        require(request.sender != address(0), "request does not exist");
        request.output = output;
        prompts[request.modelId][string(request.input)] = string(output);
        emit promptsUpdated(requestId, request.modelId, string(request.input), string(output), callbackData);
    }

    function estimateFee(uint256 modelId) public view returns (uint256) {
        return aiOracle.estimateFee(modelId, callbackGasLimit[modelId]);
    }

    function calculateAIResult(uint256 modelId, string calldata prompt) payable external {
        bytes memory input = bytes(prompt);
        bytes memory callbackData = bytes("");
        address callbackAddress = address(this);
        uint256 requestId = aiOracle.requestCallback{value: msg.value}(
            modelId, input, callbackAddress, callbackGasLimit[modelId], callbackData
        );
        AIOracleRequest storage request = requests[requestId];
        request.input = input;
        request.sender = msg.sender;
        request.modelId = modelId;
        emit promptRequest(requestId, msg.sender, modelId, prompt);
    }
}

Deployment and interaction with the contract

Foundry version

  1. Add your PRIVATE_KEY, RPC_URL and ETHERSCAN_KEY to the .env file. Then source the variables in the terminal.

    source .env

  2. Create a deployment script

    Go to the Reference page and find the OAO_PROXY address for the network you want to deploy to.

    Then open script/Prompt.s.sol and add the following code:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.13;
    
    import {Script} from "forge-std/Script.sol";
    import {Prompt} from "../src/Prompt.sol";
    import {IAIOracle} from "OAO/contracts/interfaces/IAIOracle.sol";
    
    contract PromptScript is Script {
        address OAO_PROXY;
    
        function setUp() public {
            OAO_PROXY = [OAO_PROXY_address_here];
        }
    
        function run() public {
            uint privateKey = vm.envUint("PRIVATE_KEY");
            vm.startBroadcast(privateKey);
            new Prompt(IAIOracle(OAO_PROXY));
            vm.stopBroadcast();
        }
    }

  3. Run the deployment script

    forge script script/Prompt.s.sol --rpc-url $RPC_URL --broadcast --verify --etherscan-api-key $ETHERSCAN_KEY

  4. Once the contract is deployed and verified, you can interact with it. Go to the blockchain explorer for your chosen network (e.g. Etherscan), and paste in the address of the deployed Prompt contract.

    Let's use Stable Diffusion model (id = 50).

    1. First call estimateFee method to calculate fee for the callback.

    2. Then request AI inference from OAO by calling calculateAIResult method. Pass the model id and the prompt for the image generation. Remember to provide estimated fee as a value for the transaction.

    3. After the transaction is executed and the OAO calculates the result, you can check it by calling the prompts method. Simply input the model id and the prompt you used for image generation. In the case of Stable Diffusion, the output will be a CID (content identifier on IPFS). To check the image, go to https://ipfs.io/ipfs/[Replace_your_CID].

Remix version

  1. Install a browser wallet if you haven't already (e.g. MetaMask)

  2. Open your solidity development environment. We'll use Remix IDE.

  3. Copy the contract along with necessary dependencies to Remix.

  4. Choose the solidity compiler version and compile the contract to the bytecode

  5. Deploy the compiled bytecode. Once the contract is compiled, we can deploy it.

    1. First go to the wallet and choose the blockchain network for the deployment.

    2. To deploy the Prompt contract we need to provide the address of already deployed AIOracle contract. You can find this address on the reference page. We are looking for OAO_PROXY address.

    3. Deploy the contract by signing the transaction in the wallet.

  6. Once the contract is deployed, you can interact with it. Remix provides an interface for interaction, but you can also use blockchain explorers like Etherscan. Let's use the Stable Diffusion model (id = 50).

Conclusion

In this tutorial we walked step by step through writing a Solidity smart contract that interacts with ORA's Onchain AI Oracle. We then compiled it, deployed it to a live network, and interacted with it. In the next tutorial, we will extend the Prompt contract to support AI-generated NFT collections.

Interact and build with OAO

Retrieve Historical AI Inference

If you want to retrieve a historical AI inference result (e.g. an AIGC image), you can find it on a blockchain explorer:

  1. Find your transaction for sending the AI request, and open the "Logs" tab. Example tx: https://etherscan.io/tx/0xfbfdb2efcee23197c5ea8487368a905385c84afdc465cab43bc1ad01da773404#eventlog

  2. Access your requestId in the "Logs" tab. In this example: "1928".

  3. In OAO's smart contract, look for the Invoke Callback transaction with the same requestId. Normally, this transaction will be around the same time as the one in step 1. To filter transactions by date, click "Advanced Filter" and then the button near "Age".

  4. Find the transaction for AI inference result, and enter the "Logs" tab. Example tx: https://etherscan.io/tx/0xfbfdb2efcee23197c5ea8487368a905385c84afdc465cab43bc1ad01da773404#eventlog

  5. Access the output data. In this example: "QmecBGR7dD7pRtY48FEKoeLVsmBTLwvdicWRkX9xz2NVvC", which is an IPFS hash that can be accessed through an IPFS gateway.

Integration into OAO - Tutorial

Bring Your Own Model into OAO

In this tutorial we explain how to integrate your own AI model into Onchain AI Oracle (OAO). We will start by looking at the mlgo repository to understand what happens there. At the end, we will showcase how opML works by running a simple dispute game script inside a Docker container.

Learning Objectives

  • Understand how to transform an AI model and its inference code in order to integrate it into Onchain AI Oracle (OAO).

  • Execute a simple dispute game and understand the process of AI inference verification.

Prerequisites

  • git installed

Setup

  1. Clone the mlgo repository

git clone git@github.com:OPML-Labs/mlgo.git

  2. Navigate to the cloned repository

cd mlgo

  3. Install the required dependencies:

pip install -r requirements.txt

If there are some missing dependencies, make sure to install them in your Python environment.

Train the model using Pytorch

First we need to train a DNN model using PyTorch. The training part is shown in examples/mnist/trainning/mnist.ipynb.

After training, the model is saved at examples/mnist/models/mnist/mnist-small.state_dict.

Convert model into ggml format

GGML is a file format that consists of a version number, followed by three components that define a large language model: the model's hyperparameters, its vocabulary, and its weights. GGML allows for more efficient inference on CPU. We will now convert the PyTorch model to ggml format by executing the following steps:

  1. Move to the mnist folder

cd examples/mnist

  2. Convert the Python model into ggml format

python3 convert-h5-to-ggml.py models/mnist/mnist-small.state_dict

To convert the AI model saved from Python into ggml format, we execute a Python script, passing the file that stores the model as a parameter. The output is a binary file in ggml format. Note that the model is saved in big-endian, making it easy to process in the big-endian MIPS-32 VM.
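
The big-endian detail can be seen with Python's `struct` module, which is essentially the mechanism such a conversion script relies on when writing weights (a minimal illustration, not the actual convert-h5-to-ggml.py code):

```python
import struct

# '>' forces big-endian byte order, matching the big-endian MIPS-32 VM;
# '<' would produce the little-endian layout common on x86.
weight = 1.0
big    = struct.pack(">f", weight)   # b'?\x80\x00\x00'
little = struct.pack("<f", weight)   # b'\x00\x00\x80?'
assert big == little[::-1]           # same bytes, reversed order
```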

Converting inference code to MIPS VM executable

The next step is to write the inference code in Go, and then transform the Go binary into a MIPS VM executable file.

Go supports compilation to MIPS. However, the generated executable is in ELF format; we'd like a pure sequence of MIPS instructions instead. To build the ML program for the MIPS VM, execute the following steps:

  1. Navigate to the mnist_mips directory and build the Go inference code

cd ../mnist_mips && ./build.sh

The build script compiles the Go code and then runs the compile.py script, which transforms the compiled Go binary into a MIPS VM executable file.

Running the dispute game

Now that we have compiled our AI model and inference code into a MIPS VM executable, we can test the dispute game process. We will use a bash script from the opml repository to showcase the whole verification flow.

For this part of the tutorial we will use Docker, so make sure you have it installed.

Let's first check the content of the Dockerfile that we are using:

  1. First we need to specify the operating system that runs inside our container. In this case we're using ubuntu:22.04.

# How to run instructions:
# 1. Generate ssh command: ssh-keygen -t rsa -b 4096 -C "[email protected]"
#    - Save the key in local repo where Dockerfile is placed as id_rsa
#    - Add the public key to the GitHub account
# 2. Build docker image: docker build -t ubuntu-opml-dev .
# 3. Run the hardhat: docker run -it --rm --name ubuntu-opml-dev-container ubuntu-opml-dev bash -c "npx hardhat node"
# 4. Run the challenge script on the same container: docker exec -it ubuntu-opml-dev-container bash -c "./demo/challenge_simple.sh"


# Use an official Ubuntu as a parent image
FROM ubuntu:22.04
  2. Then we install all the dependencies needed to run the dispute game script.

# Set environment variables to non-interactive to avoid prompts during package installations
ENV DEBIAN_FRONTEND=noninteractive

# Update the package list and install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    cmake \
    git \
    golang \
    wget \
    curl \
    python3 \
    python3-pip \
    python3-venv \
    unzip \
    file \
    openssh-client \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install Node.js and npm
RUN curl -fsSL https://deb.nodesource.com/setup_18.x | bash - && \
    apt-get install -y nodejs
  3. Then we configure SSH keys, so that the Docker container can clone all the required repositories.

# Copy SSH keys into the container
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
# Configure SSH to skip host key verification
RUN echo "Host *\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config
  4. Move to the root directory inside the Docker container and clone the opml repository along with its submodules.

# Set the working directory
WORKDIR /root

# Clone the OPML repository
RUN git clone git@github.com:ora-io/opml.git --recursive
WORKDIR /root/opml
  5. Lastly, we tell Docker to build the executables and run the challenge script.

# Build the OPML project
RUN make build

# Change permission for the challenge script
RUN chmod +x demo/challenge_simple.sh

# Default command
CMD ["bash"]

Create docker container and run the script

  1. In order to successfully clone the opml repository, you need to generate a new SSH key. Save the generated key as id_rsa in the local directory where the Dockerfile is placed, then add the public key to your GitHub account.

    ssh-keygen -t rsa -b 4096 -C "[email protected]"

  2. Build the docker image

    docker build -t ubuntu-opml-dev .

  3. Run the local Ethereum node

    docker run -it --rm --name ubuntu-opml-dev-container ubuntu-opml-dev bash -c "npx hardhat node"

  4. In another terminal run the challenge script

    docker exec -it ubuntu-opml-dev-container bash -c "./demo/challenge_simple.sh"

After executing the steps above you should be able to see interactive challenge process in the console.

The script first deploys the necessary contracts to the local node. The proposer opML node executes AI inference, and a challenger node can dispute the result if it believes the result is invalid. The challenger and proposer interact to find the first step at which their computations differ. Once the disputed step is found, it is sent to the smart contract for arbitration. If the challenge succeeds, the proposer node gets slashed.
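
The search for the differing step is essentially a binary search over the two execution traces. A minimal sketch with hypothetical state hashes (not the actual opml protocol messages):

```python
def find_dispute_step(proposer: list[bytes], challenger: list[bytes]) -> int:
    """Bisect to the first step where the two state traces diverge."""
    assert len(proposer) == len(challenger) and proposer[-1] != challenger[-1]
    lo, hi = 0, len(proposer) - 1  # invariant: traces agree at lo, differ at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if proposer[mid] == challenger[mid]:
            lo = mid
        else:
            hi = mid
    return hi  # the single-step transition lo -> hi goes to onchain arbitration

trace_p = [b"s0", b"s1", b"s2", b"s3"]
trace_c = [b"s0", b"s1", b"x2", b"x3"]
assert find_dispute_step(trace_p, trace_c) == 2  # first divergence at step 2
```

Only the one disputed step is re-executed onchain, which is what keeps arbitration cheap even for huge models.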

Conclusion

In this tutorial we achieved the following:

  • converted our AI model from Python to ggml format

  • compiled the AI inference code written in Go to the MIPS VM executable format

  • ran the dispute game inside a Docker container and walked through the opML verification process

Next steps

To use your AI model onchain, you need to run your own opML nodes; the model can then be integrated into OAO. Try to reproduce this tutorial with your own model.

Reference

Models

| Model ID | Model | Fee | Deployed Network |
| --- | --- | --- | --- |
| 11 | LlaMA 3 (8B) | Mainnet: 0.0003 ETH / 3 MATIC; Testnet: 0.01 ETH | Ethereum, Optimism, Arbitrum, Manta, Linea, Base |
| 13 | OpenLM (1B) | Mainnet: 0.0003 ETH; Testnet: 0.01 ETH | Ethereum |
| 50 | Stable Diffusion* | Mainnet: 0.0003 ETH / 3 MATIC; Testnet: 0.01 ETH | Ethereum, Optimism, Arbitrum, Manta, Linea, Base |

*: Return value is IPFS Hash (access with IPFS gateway, see example).

Deployed Addresses

Prompt and SimplePrompt are both example smart contracts that interact with OAO.

SimplePrompt saves gas by only emitting the event without storing historical data.

Ethereum Mainnet

| Contract | Mainnet Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xb880D47D3894D99157B52A7F869aB3B1E2D4349d |
| SimplePrompt | 0x61423153f111BCFB28dd264aBA8d9b5C452228D2 |

Ethereum Sepolia

Deprecated contracts: AIOracle, Prompt.

| Contract | Sepolia Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xe75af5294f4CB4a8423ef8260595a54298c7a2FB |
| SimplePrompt | 0x696c83111a49eBb94267ecf4DDF6E220D5A80129 |

Optimism Mainnet

| Contract | Mainnet Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xC3287BDEF03b925A7C7f54791EDADCD88e632CcD |
| SimplePrompt | 0xBC24514E541d5CBAAC1DD155187A171a593e5CF6 |

Optimism Sepolia

| Contract | Sepolia Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0x3c8Cd1714AC9c380702D160BE4cee0D291Eb89C0 |
| SimplePrompt | 0xf6919ebb1bFdD282c4edc386bFE3Dea1a1D8AC16 |

Arbitrum Mainnet

| Contract | Mainnet Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xC20DeDbE8642b77EfDb4372915947c87b7a526bD |
| SimplePrompt | 0xC3287BDEF03b925A7C7f54791EDADCD88e632CcD |

Arbitrum Sepolia Testnet

| Contract | Sepolia Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xC3287BDEF03b925A7C7f54791EDADCD88e632CcD |
| SimplePrompt | 0xBC24514E541d5CBAAC1DD155187A171a593e5CF6 |

Manta Mainnet

| Contract | Mainnet Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xBC24514E541d5CBAAC1DD155187A171a593e5CF6 |
| SimplePrompt | 0x523622DfEd0243B0DF80CC9275764B0f432D33E3 |

Manta Sepolia Testnet

| Contract | Testnet Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xC20DeDbE8642b77EfDb4372915947c87b7a526bD |
| SimplePrompt | 0x3bfD1Cc919bfeC7795b600E764aDa001b58f122a |

Linea Mainnet

| Contract | Mainnet Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xC20DeDbE8642b77EfDb4372915947c87b7a526bD |
| SimplePrompt | 0xb880D47D3894D99157B52A7F869aB3B1E2D4349d |

Base Mainnet

| Contract | Mainnet Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xC20DeDbE8642b77EfDb4372915947c87b7a526bD |
| SimplePrompt | 0xC3287BDEF03b925A7C7f54791EDADCD88e632CcD |

Polygon PoS Mainnet

| Contract | Mainnet Address |
| --- | --- |
| OAO Proxy | 0x0A0f4321214BB6C7811dD8a71cF587bdaF03f0A0 |
| Prompt | 0xC20DeDbE8642b77EfDb4372915947c87b7a526bD |
| SimplePrompt | 0xC3287BDEF03b925A7C7f54791EDADCD88e632CcD |

Towards a World Supercomputer

A vision initiated by ORA

For more details, check out World Supercomputer's litepaper.

World Supercomputer is a set of topologically heterogeneous peer-to-peer networks connected by a secure data bus. This new concept of a global network was first introduced by ORA.

While Ethereum consensus maintains the global ledger, specialized networks scale computing power and storage capacity. ORA, for example, serves the role of scaling the computation and AI capabilities of the network.

ORA is committed to supporting Ethereum as the World Computer and its ecosystem for the future of DeFi, zkML, AI x Crypto, etc.

We hold regular World Supercomputer Summits. Check out the recaps:

  • World Supercomputer Summit 2023 Recap

  • World Supercomputer Community Day @Token2049 2023 Recap