
AI Inference API

SpaceRouter is OpenAI-compatible. Create a key, point your existing SDK at it, and you're done.

SpaceRouter provides an OpenAI-compatible API for LLM inference across the network. It is a drop-in replacement for the OpenAI client: just swap the base_url.

Create an API key

POST /api/router/keys
Content-Type: application/json
Authorization: Bearer <jwt>

{ "name": "my-agent" }

Returns:

{ "key": "sk-...", "id": "..." }

Store the returned key securely; it cannot be retrieved later.
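A minimal sketch of the key-creation call in Python, using only the standard library. The docs give only the path for this endpoint, so the host below is an assumption; adjust it for your deployment.

```python
import json

# Assumed host: the docs only state the path /api/router/keys.
API_BASE = "https://spacerouter.ai"

def build_create_key_request(jwt: str, name: str):
    """Assemble URL, headers, and JSON body for POST /api/router/keys."""
    url = f"{API_BASE}/api/router/keys"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {jwt}",
    }
    body = json.dumps({"name": name})
    return url, headers, body

url, headers, body = build_create_key_request("<jwt>", "my-agent")
# Send with the stdlib, e.g.:
# import urllib.request
# req = urllib.request.Request(url, data=body.encode(), headers=headers, method="POST")
# with urllib.request.urlopen(req) as resp:
#     created = json.load(resp)   # {"key": "sk-...", "id": "..."}
```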

Chat completions

POST https://spacerouter.ai/v1/chat/completions
Content-Type: application/json
Authorization: Bearer sk-...

{
  "model": "meta-llama/Llama-3.1-70B-Instruct",
  "messages": [{ "role": "user", "content": "Hello" }]
}

List models

GET /api/router/models

Returns the catalogue of available models with metadata (parameter size, VRAM, category, pricing).
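A typical use of the catalogue is filtering by metadata before picking a model. The exact JSON field names below are assumptions for illustration; the docs only say each entry carries parameter size, VRAM, category, and pricing.

```python
# Hypothetical catalogue shape; field names are assumed, not documented.
sample_catalogue = [
    {"id": "meta-llama/Llama-3.1-70B-Instruct", "category": "chat",
     "params_b": 70, "vram_gb": 140},
]

def ids_by_category(catalogue, category):
    """Return model ids from a GET /api/router/models response, filtered by category."""
    return [m["id"] for m in catalogue if m["category"] == category]

print(ids_by_category(sample_catalogue, "chat"))
```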

Using existing OpenAI SDKs

Python:

import os

from openai import OpenAI

client = OpenAI(
    base_url="https://spacerouter.ai/v1",
    api_key=os.environ["SPACEROUTER_API_KEY"],
)

TypeScript:

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://spacerouter.ai/v1",
  apiKey: process.env.SPACEROUTER_API_KEY,
});
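Once the client is constructed, requests work exactly as with OpenAI. A sketch of a chat call, assuming a valid SPACEROUTER_API_KEY; the network call itself is commented out:

```python
# Assumes `client` is built as shown above with a valid key.
request_kwargs = dict(
    model="meta-llama/Llama-3.1-70B-Instruct",
    messages=[{"role": "user", "content": "Hello"}],
)
# resp = client.chat.completions.create(**request_kwargs)
# print(resp.choices[0].message.content)
```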

For the model catalogue and per-token pricing, see /api and /pricing. For per-tier rate limits, see Rate Limits.