The ORA API is an integral part of ORA's Resilient Model Services (RMS).
The ORA API serves as the gateway for developers to interact with RMS, providing a decentralized, verifiable, and resilient platform for AI computations.
Advantages of the ORA API include broad support for the leading AI models in crypto, competitive pricing with predictable onchain cost, a verifiable AI inference service, and OpenAI compatibility.
Integrate with ORA API
Getting Started
Before using the ORA API, developers need to:
Obtain an API Key: Register for an ORA API key through the developer portal to authenticate your requests.
Set Up Environment: Ensure you have Python installed if you're using the SDK, or a tool like cURL for direct API calls.
Use ORA API
Chat Completion
POST https://api.ora.io/v1/chat/completions
Generate text responses from ORA API AI models.
Headers

| Name | Value |
| --- | --- |
| Content-Type | application/json |
| Authorization | Bearer <ORA_API_KEY> |
Body

| Name | Type | Description |
| --- | --- | --- |
| model | string | Name of the AI model to use |
| messages | array | Content for the AI model to process |
Example in shell

```shell
curl -X POST "https://api.ora.io/v1/chat/completions" \
  -H "Authorization: Bearer $ORA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-ai/DeepSeek-V3",
        "messages": [{"role": "user", "content": "What are some fun things to do in New York?"}]
      }'
```
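The same request can be sketched in Python with the standard `requests` library. This is an illustrative payload builder, not an official client; the response parsing shown in the comment assumes the OpenAI-compatible chat-completions shape the document describes:

```python
import os

ORA_CHAT_URL = "https://api.ora.io/v1/chat/completions"

def build_chat_request(model: str, messages: list) -> dict:
    """Assemble the documented body fields: model and messages."""
    return {"model": model, "messages": messages}

payload = build_chat_request(
    "deepseek-ai/DeepSeek-V3",
    [{"role": "user", "content": "What are some fun things to do in New York?"}],
)
headers = {
    "Authorization": f"Bearer {os.environ.get('ORA_API_KEY', '')}",
    "Content-Type": "application/json",
}

# To perform the actual call (requires a valid key):
# import requests
# resp = requests.post(ORA_CHAT_URL, headers=headers, json=payload, timeout=30)
# print(resp.json()["choices"][0]["message"]["content"])  # assumed OpenAI-style shape
```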
Image Generation
POST https://api.ora.io/v1/images/generations
Generate images based on text prompts.
Headers

| Name | Value |
| --- | --- |
| Content-Type | application/json |
| Authorization | Bearer <ORA_API_KEY> |
Body

| Name | Type | Description |
| --- | --- | --- |
| model | string | Name of the AI model to use |
| prompt | string | Prompt for the AI model to process |
| steps | integer | Number of diffusion steps the AI model will take during generation |
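The body fields above can be assembled the same way in Python. This is a sketch; the model name and prompt are example values, and the actual call is left commented out:

```python
import os

ORA_IMAGE_URL = "https://api.ora.io/v1/images/generations"

def build_image_request(model: str, prompt: str, steps: int) -> dict:
    """Assemble the documented body fields: model, prompt, and steps."""
    return {"model": model, "prompt": prompt, "steps": steps}

payload = build_image_request(
    "black-forest-labs/FLUX.1-schnell",
    "A watercolor skyline of New York at dusk",
    4,  # FLUX.1-schnell pricing in this document is quoted at 4 steps
)
headers = {
    "Authorization": f"Bearer {os.environ.get('ORA_API_KEY', '')}",
    "Content-Type": "application/json",
}

# To perform the actual call (requires a valid key):
# import requests
# resp = requests.post(ORA_IMAGE_URL, headers=headers, json=payload, timeout=60)
```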
ORA API is designed to be compatible with the OpenAI SDK, facilitating a smooth transition for developers already familiar with it:
```python
import openai

# Define your query
system_content = "You are a helpful assistant."
user_content = "What are some fun things to do in New York?"

# Set your ORA API key
ORA_API_KEY = "YOUR_ORA_API_KEY"

# Initialize the client
client = openai.OpenAI(
    api_key=ORA_API_KEY,
    base_url="https://api.ora.io/v1",
)

# Perform a chat completion
chat_completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    messages=[
        {"role": "system", "content": system_content},
        {"role": "user", "content": user_content},
    ],
)

# Print the response
response = chat_completion.choices[0].message.content
print("Response:\n", response)
```
Supported Models and Pricing
ORA API supports a variety of open-source models, all verifiable through opML:
# language models
"meta-llama/Llama-3.3-70B-Instruct" = 1.76, # Per 1M Tokens
"meta-llama/Llama-3.2-3B-Instruct" = 0.12, # Per 1M Tokens
"meta-llama/Meta-Llama-3-70B-Instruct" = 1.76, # Per 1M Tokens
"meta-llama/Llama-2-13b-chat-hf" = 0.44, # Per 1M Tokens
"meta-llama/Llama-2-7b-chat-hf" = 0.40, # Per 1M Tokens
"meta-llama/Llama-3.1-405B-Instruct" = 7.0, # Per 1M Tokens
"meta-llama/Llama-3.2-1B-Instruct" = 0.12, # Per 1M Tokens
"meta-llama/Meta-Llama-3-8B-Instruct" = 0.36, # Per 1M Tokens
"google/gemma-2b-it" = 0.20, # Per 1M Tokens
"google/gemma-2-27b-it" = 1.60, # Per 1M Tokens
"google/gemma-2-9b-it" = 0.60, # Per 1M Tokens
"mistralai/Mistral-7B-Instruct-v0.3" = 0.40, # Per 1M Tokens
"mistralai/Mixtral-8x22B-Instruct-v0.1" = 2.4, # Per 1M Tokens
"mistralai/Mistral-7B-Instruct-v0.2" = 0.40, # Per 1M Tokens
"mistralai/Mixtral-8x7B-Instruct-v0.1" = 1.20, # Per 1M Tokens
"mistralai/Mistral-7B-Instruct-v0.1" = 0.40, # Per 1M Tokens
"Qwen/QwQ-32B-Preview" = 2.40, # Per 1M Tokens
"Qwen/Qwen2.5-Coder-32B-Instruct" = 1.60, # Per 1M Tokens
"Qwen/Qwen2.5-72B-Instruct" = 2.40, # Per 1M Tokens
"Qwen/Qwen2-72B-Instruct" = 1.80, # Per 1M Tokens
"deepseek-ai/DeepSeek-V3" = 2.50, # Per 1M Tokens
# image generation models
"black-forest-labs/FLUX.1-dev" = 0.050, # Per 1M Pixels @ 28 Steps
"black-forest-labs/FLUX.1-canny" = 0.050, # Per 1M Pixels @ 28 Steps
"black-forest-labs/FLUX.1-redux-dev" = 0.050, # Per 1M Pixels @ 28 Steps
"black-forest-labs/FLUX.1-schnell" = 0.006, # Per 1M Pixels @ 4 Steps
"stabilityai/stable-diffusion-3.5-large" = 0.13, # Per Image
"stabilityai/stable-diffusion-3.5-large-turbo" = 0.08, # Per Image
"stabilityai/stable-diffusion-3-medium" = 0.07, # Per Image
"stabilityai/stable-diffusion-3.5-medium" = 0.07, # Per Image
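The per-token prices above translate into request costs as in the back-of-the-envelope helper below. This is an estimate only; how prompt and completion tokens are counted for billing is not specified here:

```python
# USD per 1M tokens, copied from the price list above (a subset).
PRICE_PER_1M_TOKENS = {
    "deepseek-ai/DeepSeek-V3": 2.50,
    "meta-llama/Llama-3.3-70B-Instruct": 1.76,
    "mistralai/Mistral-7B-Instruct-v0.3": 0.40,
}

def estimate_cost(model: str, total_tokens: int) -> float:
    """Estimate the USD cost of a request from its total token count."""
    return PRICE_PER_1M_TOKENS[model] * total_tokens / 1_000_000

# e.g. a 2,000-token exchange with DeepSeek-V3 costs about $0.005:
print(estimate_cost("deepseek-ai/DeepSeek-V3", 2_000))
```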
Best Practices
- Error Handling: Always implement error handling to manage API response errors or timeouts.
- Rate Limiting: Be aware of and respect rate limits to avoid service disruptions.
- Security: Never expose your API key in client-side code. Use server-side calls or secure environment variables.
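The error-handling and rate-limiting points above can be combined into a retry-with-backoff sketch. The retryable status codes and backoff schedule here are illustrative assumptions, not documented ORA API behavior:

```python
import time

RETRYABLE = {429, 500, 502, 503}  # assumed transient statuses (rate limits, server errors)

def call_with_retries(send, max_attempts=3, base_delay=1.0):
    """Call send() -> (status, body), retrying transient failures with
    exponential backoff; raise RuntimeError once attempts are exhausted
    or on a non-retryable status."""
    for attempt in range(max_attempts):
        status, body = send()
        if status == 200:
            return body
        if status not in RETRYABLE or attempt == max_attempts - 1:
            raise RuntimeError(f"ORA API error: HTTP {status}")
        time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")

# Usage with a stubbed transport that is rate-limited once, then succeeds:
responses = iter([(429, None), (200, {"ok": True})])
print(call_with_retries(lambda: next(responses), base_delay=0.01))
```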