ORA API

Developer Portal: https://rms.ora.io/

Intro to ORA API

ORA API is an integral part of ORA's Resilient Model Services (RMS). See the supported models and pricing of the ORA API below.

The ORA API serves as the gateway for developers to interact with RMS, providing a decentralized, verifiable, and resilient platform for AI computations.

The advantages of the ORA API include the broadest AI model support in crypto, competitive pricing with predictable onchain cost, a verifiable AI inference service, and OpenAI compatibility.

Integrate with ORA API

Getting Started

Before using the ORA API, developers need to:

  1. Obtain API Key: Register for an ORA API key through the developer portal to authenticate your requests.

  2. Set Up Environment: Ensure you have Python installed if you're using the SDK, or a tool like cURL for direct API calls.

ORA API Key Design

  1. Smart Contract Management

    1. All API keys are managed directly on the smart contract. This includes both the generation and deletion processes of API keys.

  2. API Key Structure

    1. We do not store your API keys. Instead, each API key is generated by signing a message from the smart contract. The API key has the following structure, which enables you to recover it within your local environment: {blockchain identifier}:{base58encoding(message:signature(message, privateKey))}

    2. Here, the message is a bytes32 data type within the smart contract.

  3. Key Generation and Deletion Mechanisms

    1. Generating a new API key is analogous to creating a new address from a wallet mnemonic: by monotonically increasing the value of the message, you obtain a new API key. API key deletion is achieved by disabling the key within the smart contract.
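The derivation above can be sketched in Python. This is an illustration only: `fake_sign` is a placeholder for the real ECDSA signature your wallet would produce, and the chain identifier "eth" is hypothetical; only the key layout and the Base58 step follow the format described above.

```python
import hashlib

BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    """Standard Base58 encoding (Bitcoin alphabet)."""
    num = int.from_bytes(data, "big")
    out = ""
    while num > 0:
        num, rem = divmod(num, 58)
        out = BASE58_ALPHABET[rem] + out
    # Preserve leading zero bytes as '1' characters.
    for b in data:
        if b == 0:
            out = "1" + out
        else:
            break
    return out or "1"

def fake_sign(message: bytes, private_key: bytes) -> bytes:
    # Placeholder only: a real key uses an ECDSA signature over the
    # bytes32 message from the smart contract, not a hash.
    return hashlib.sha256(private_key + message).digest()

def derive_api_key(chain_id: str, nonce: int, private_key: bytes) -> str:
    # The message is a bytes32 value; incrementing the nonce yields a new key,
    # analogous to deriving a new address from a mnemonic.
    message = nonce.to_bytes(32, "big")
    signature = fake_sign(message, private_key)
    return f"{chain_id}:{base58_encode(message + b':' + signature)}"

key = derive_api_key("eth", 1, b"\x01" * 32)
```

Because the key is derived rather than stored, anyone holding the private key and the message value can reconstruct it locally.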

Use ORA API

Chat Completion

POST https://api.ora.io/v1/chat/completions

Generate text responses from ORA API AI models.

Headers

  Name           Value
  Content-Type   application/json
  Authorization  Bearer <ORA_API_KEY>

Body

  Name      Type    Description
  model     string  Name of AI model
  messages  array   Content for AI model to process

Example in shell

curl -X POST "https://api.ora.io/v1/chat/completions" \
  -H "Authorization: Bearer $ORA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-V3",
    "messages": [{"role": "user", "content": "What are some fun things to do in New York?"}]
  }'
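The same request can be assembled with only the Python standard library. This sketch builds the documented headers and body but does not send anything; the `build_request` helper is our illustration, not part of the ORA API.

```python
import json
import os
import urllib.request

API_URL = "https://api.ora.io/v1/chat/completions"

def build_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Assemble the POST request with the documented headers and JSON body."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request(
    os.environ.get("ORA_API_KEY", ""),
    "deepseek-ai/DeepSeek-V3",
    [{"role": "user", "content": "What are some fun things to do in New York?"}],
)
# To actually send it (requires a valid ORA_API_KEY):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```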

Image Generation

POST https://api.ora.io/v1/images/generations

Generate images based on text prompts.

Headers

  Name           Value
  Content-Type   application/json
  Authorization  Bearer <ORA_API_KEY>

Body

  Name    Type     Description
  model   string   Name of AI model
  prompt  string   Prompt for AI model to process
  steps   integer  Number of diffusion steps AI model will take during generation
  n       integer  Number of images to generate based on prompt

Example in shell

curl -X POST "https://api.ora.io/v1/images/generations" \
  -H "Authorization: Bearer $ORA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "black-forest-labs/FLUX.1-dev",
    "prompt": "Cats eating popcorn",
    "steps": 10,
    "n": 4
  }'

The generated images are stored on IPFS.
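Since IPFS content is addressed by CID, a generated image can be fetched through any public IPFS gateway. A minimal sketch, assuming you have extracted a CID from the API response (the exact response field is not specified here, so the CID below is a placeholder):

```python
def ipfs_gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    """Build an HTTP URL for an IPFS CID via a public gateway.

    The gateway host is a generic public gateway, not an ORA-specific
    endpoint; any IPFS gateway serves the same content address.
    """
    return f"{gateway}/ipfs/{cid}"

url = ipfs_gateway_url("QmExampleCid")  # placeholder CID
# Fetch with any HTTP client, e.g. urllib.request.urlopen(url).read()
```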

SDK Integration

ORA API is designed to be compatible with the OpenAI SDK, facilitating a smooth transition for developers already familiar with it:

import openai

# Define your query
system_content = "You are a helpful assistant."
user_content = "What are some fun things to do in New York?"

# Set your ORA API key
ORA_API_KEY = "YOUR_ORA_API_KEY"

# Initialize the client
client = openai.OpenAI(
    api_key=ORA_API_KEY,
    base_url="https://api.ora.io/v1",
)

# Perform a chat completion
chat_completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    messages=[
        {"role": "system", "content": system_content},
        {"role": "user", "content": user_content},
    ]
)

# Print the response
response = chat_completion.choices[0].message.content
print("Response:\n", response)

Supported Models and Pricing

ORA API supports a variety of open-source models, all verifiable through opML.

Prices are measured in units of $ORA.

# language models
"deepseek-ai/DeepSeek-V3" = 0.15,                          # Per 1M Tokens
"deepseek-ai/DeepSeek-R1" = 1.35,                          # Per 1M Tokens

"meta-llama/Llama-3.3-70B-Instruct" = 0.68,                # Per 1M Tokens
"meta-llama/Llama-3.2-3B-Instruct" = 0.05,                 # Per 1M Tokens
"meta-llama/Llama-2-13b-chat-hf" = 0.17,                   # Per 1M Tokens
"meta-llama/Llama-2-7b-chat-hf" = 0.15,                    # Per 1M Tokens
"meta-llama/Llama-3.1-405B-Instruct" = 2.69,                # Per 1M Tokens
"meta-llama/Llama-3.2-1B-Instruct" = 0.05,                 # Per 1M Tokens
"meta-llama/Meta-Llama-3-8B-Instruct" = 0.14,              # Per 1M Tokens

"google/gemma-2b-it" = 0.08,                               # Per 1M Tokens
"google/gemma-2-27b-it" = 0.62,                            # Per 1M Tokens
"google/gemma-2-9b-it" = 0.23,                             # Per 1M Tokens
"mistralai/Mistral-7B-Instruct-v0.3" = 0.15,               # Per 1M Tokens
"mistralai/Mixtral-8x22B-Instruct-v0.1" = 0.92,             # Per 1M Tokens
"mistralai/Mistral-7B-Instruct-v0.2" = 0.15,               # Per 1M Tokens
"mistralai/Mixtral-8x7B-Instruct-v0.1" = 0.46,             # Per 1M Tokens
"mistralai/Mistral-7B-Instruct-v0.1" = 0.15,               # Per 1M Tokens

"Qwen/QwQ-32B-Preview" = 0.92,                             # Per 1M Tokens
"Qwen/Qwen2.5-Coder-32B-Instruct" = 0.62,                  # Per 1M Tokens
"Qwen/Qwen2.5-72B-Instruct" = 0.92,                        # Per 1M Tokens
"Qwen/Qwen2-72B-Instruct" = 0.96,                          # Per 1M Tokens

# image generation models
"black-forest-labs/FLUX.1-dev" = 0.020,                    # Per 1M Pixels @ 28 Steps
"black-forest-labs/FLUX.1-canny" = 0.020,                  # Per 1M Pixels @ 28 Steps
"black-forest-labs/FLUX.1-redux-dev" = 0.020,              # Per 1M Pixels @ 28 Steps
"black-forest-labs/FLUX.1-schnell" = 0.006,                # Per 1M Pixels @ 4 Steps

"stabilityai/stable-diffusion-3.5-large" = 0.05,           # Per Image
"stabilityai/stable-diffusion-3.5-large-turbo" = 0.03,     # Per Image
"stabilityai/stable-diffusion-3-medium" = 0.03,            # Per Image
"stabilityai/stable-diffusion-3.5-medium" = 0.03,          # Per Image

Best Practices

  • Error Handling: Always implement error handling to manage API response errors or timeouts.

  • Rate Limiting: Be aware of and respect rate limits to avoid service disruptions.

  • Security: Never expose your API key in client-side code. Use server-side calls or secure environment variables.
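A minimal sketch of the first two practices, wrapping any ORA API call in retries with exponential backoff. `with_retries` is our helper, and which exceptions are worth retrying (timeouts, 429 rate-limit responses) depends on your HTTP client.

```python
import time

def with_retries(call, max_attempts: int = 3, base_delay: float = 1.0):
    """Retry a callable on failure, backing off exponentially between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)

# Demonstration with a stand-in for an API call that fails once, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 2:
        raise TimeoutError("simulated timeout")
    return "ok"

result = with_retries(flaky, base_delay=0.0)
```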
