Model and Pricing

Supported Models and Pricing

ORA API supports a variety of open-source models, all verifiable through opML.

Prices are measured in units of $ORA.

# Language models
"deepseek-ai/DeepSeek-V3" = 0.15,                          # Per 1M Tokens
"deepseek-ai/DeepSeek-R1" = 1.35,                          # Per 1M Tokens

"meta-llama/Llama-3.3-70B-Instruct" = 0.68,                # Per 1M Tokens
"meta-llama/Llama-3.2-3B-Instruct" = 0.05,                 # Per 1M Tokens
"meta-llama/Llama-2-13b-chat-hf" = 0.17,                   # Per 1M Tokens
"meta-llama/Llama-2-7b-chat-hf" = 0.15,                    # Per 1M Tokens
"meta-llama/Llama-3.1-405B-Instruct" = 2.69,               # Per 1M Tokens
"meta-llama/Llama-3.2-1B-Instruct" = 0.05,                 # Per 1M Tokens
"meta-llama/Meta-Llama-3-8B-Instruct" = 0.14,              # Per 1M Tokens

"google/gemma-2b-it" = 0.08,                               # Per 1M Tokens
"google/gemma-2-27b-it" = 0.62,                            # Per 1M Tokens
"google/gemma-2-9b-it" = 0.23,                             # Per 1M Tokens

"mistralai/Mistral-7B-Instruct-v0.3" = 0.15,               # Per 1M Tokens
"mistralai/Mixtral-8x22B-Instruct-v0.1" = 0.92,            # Per 1M Tokens
"mistralai/Mistral-7B-Instruct-v0.2" = 0.15,               # Per 1M Tokens
"mistralai/Mixtral-8x7B-Instruct-v0.1" = 0.46,             # Per 1M Tokens
"mistralai/Mistral-7B-Instruct-v0.1" = 0.15,               # Per 1M Tokens

"Qwen/QwQ-32B-Preview" = 0.92,                             # Per 1M Tokens
"Qwen/Qwen2.5-Coder-32B-Instruct" = 0.62,                  # Per 1M Tokens
"Qwen/Qwen2.5-72B-Instruct" = 0.92,                        # Per 1M Tokens
"Qwen/Qwen2-72B-Instruct" = 0.96,                          # Per 1M Tokens

# Image generation models
"black-forest-labs/FLUX.1-dev" = 0.020,                    # Per 1M Pixels @ 28 Steps
"black-forest-labs/FLUX.1-canny" = 0.020,                  # Per 1M Pixels @ 28 Steps
"black-forest-labs/FLUX.1-redux-dev" = 0.020,              # Per 1M Pixels @ 28 Steps
"black-forest-labs/FLUX.1-schnell" = 0.006,                # Per 1M Pixels @ 4 Steps

"stabilityai/stable-diffusion-3.5-large" = 0.05,           # Per Image
"stabilityai/stable-diffusion-3.5-large-turbo" = 0.03,     # Per Image
"stabilityai/stable-diffusion-3-medium" = 0.03,            # Per Image
"stabilityai/stable-diffusion-3.5-medium" = 0.03,          # Per Image

# Video generation models
"KumoAnonymous/KumoVideo-Turbo" = 1,                       # Per Video
