Frequently Asked Questions
💡 All relevant information for staking can be found on the Staking page.
You earn 8N points per day for staking N ETH (for example, staking 1 ETH earns 8 points every day), where N is the amount of ETH or other supported assets you have staked.
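The formula above can be sketched in a few lines (an illustrative helper, assuming points accrue linearly per day as described):

```python
def daily_staking_points(staked_amount: float) -> float:
    """Daily ORA points earned for a given stake: 8 points per unit staked per day."""
    return 8 * staked_amount

# Staking 1 ETH earns 8 points per day; staking 2.5 ETH earns 20 per day.
print(daily_staking_points(1))    # 8
print(daily_staking_points(2.5))  # 20.0
```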
You can find more information on the ORA Points Program by checking the following link – https://docs.ora.io/doc/points/staking#staking-points
In Phase 1, there is no lock-up period. Initiating a withdrawal starts a 24-hour waiting period before you are able to claim it.
After you initiate the withdrawal, you will need to submit another transaction to “claim” it (two transactions in total).
In Phase 2, there are two options: locked and flexible staking. Locked staking is subject to a six-month lock-up period, while flexible staking lets you withdraw at any time. Locked staking participants earn more points.
Read more here: https://docs.ora.io/doc/points/staking#withdraw
The contracts were audited by Salus. Reference here: https://x.com/salus_sec/status/1811658734998811005
Information about running the Tora client can be found under our Node Operator Guide.
Currently there are two options for running a Tora client: the Tora Launcher and the CLI.
The incentive is in the form of ORA points. Read more on the Points page.
Each validated transaction will earn 3 points. Read more here: https://docs.ora.io/doc/points/tasks#task-4-running-validator-node
Neither.
ORA is a verifiable AI oracle network. It contains a set of smart contracts capable of making calls to a network of nodes computing AI inference, secured by opML.
The content generated by ORA's AI Oracle can be securely stored on decentralized storage networks like IPFS (InterPlanetary File System). Once stored, the files can be retrieved using the Content Identifier (CID), which is provided by ORA’s AI Oracle.
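Retrieval via a CID can be sketched with a public HTTP gateway (the gateway host below is just one example; any IPFS gateway serving the standard `/ipfs/<CID>` path works, and the CID shown is a placeholder, not a real oracle result):

```python
def ipfs_gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    """Build an HTTP URL to retrieve IPFS content by its CID via a public gateway."""
    return f"{gateway}/ipfs/{cid}"

# Placeholder CID standing in for one returned by the AI Oracle:
url = ipfs_gateway_url("QmExampleCid123")
print(url)  # https://ipfs.io/ipfs/QmExampleCid123
```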
Depending on your use case, you can choose from a variety of supported AI models. Please refer to the References page, where you'll find all the essential details regarding each model, including supported blockchain networks, associated fees, and other relevant information.
AI Oracle fee = Model Fee (for a model such as LlaMA2 or Stable Diffusion) + Callback Fee (for the node to submit an inference result back onchain) + Network Fee (gas)
Callback fees and network fees may be higher when the network is experiencing congestion.
Callback fees may be lower for a model such as Stable Diffusion, because the inference result is shorter (just an IPFS hash, rather than the long text output of an LLM).
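The fee formula above is a simple sum, which can be sketched as follows (the wei amounts are placeholders for illustration only; actual fees depend on the model, the callback payload, and current gas prices):

```python
def ai_oracle_fee(model_fee_wei: int, callback_fee_wei: int, network_fee_wei: int) -> int:
    """Total AI Oracle fee = model fee + callback fee + network (gas) fee, in wei."""
    return model_fee_wei + callback_fee_wei + network_fee_wei

# Placeholder values in wei -- illustrative only, not real fee quotes:
total = ai_oracle_fee(
    model_fee_wei=10**15,          # model fee
    callback_fee_wei=5 * 10**14,   # callback fee (larger for long LLM outputs)
    network_fee_wei=2 * 10**14,    # gas
)
print(total)  # 1700000000000000
```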
ML inferences can be deterministic provided that the random seed is fixed and the inference is run using Nvidia's deterministic framework or in our deterministic VM. Learn more from this talk on determinism in ML.
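The role of the fixed seed can be illustrated with a toy example (plain Python randomness standing in for ML inference; real deterministic inference additionally requires a deterministic execution framework, as noted above):

```python
import random

def toy_inference(seed: int) -> list:
    """Stand-in for an ML inference: with the seed fixed, the output is reproducible."""
    rng = random.Random(seed)  # instance-local generator seeded deterministically
    return [rng.random() for _ in range(3)]

# Same seed -> identical "inference" results on every run, on every machine.
assert toy_inference(42) == toy_inference(42)
# Different seeds generally produce different results.
assert toy_inference(42) != toy_inference(7)
```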
Privacy: all models in opML need to be public and open-source so that network participants can challenge them. This can be mitigated with opp/ai.
Modulus Labs' zkML experiment bringing GPT2-1B onchain resulted in over 1,000,000x overhead (200+ hours per call), requiring a 128-core CPU and 1 TB of RAM.
The zkML framework EZKL takes around 80 minutes to generate a proof of a 1M-nanoGPT model.
According to Modulus Labs, zkML has well over 1000x overhead compared to pure computation, with the latest report citing 1000x, and 200x for small models.
According to EZKL's benchmark, the average proving time of RISC Zero is 173 seconds for Random Forest Classification.
For more details, refer to:
Ethereum Foundation's granted benchmark, which compares our zkML framework to other leading solutions in the space.
EthBelgrade conference talk, which mentions zkLLM and Ligetron data.
Unfortunately, the OG role was a limited-time opportunity exclusively for our first 100 Discord community members, and that window has closed. However, we appreciate your interest and look forward to having you as part of our community!
OLM is the first AI model launched through the IMO framework. More details here: https://docs.openlm.io/olm/initial-model-offering
More questions? Reach out in our Discord.