Introduction
What Xenodia is, how the gateway works, and where to begin when integrating the public API.
Xenodia is a unified AI gateway for agentic systems. It combines model access, routing, media generation, wallet-aware billing, and agent setup into one public integration surface.
Use these docs when you need to:
- Call OpenAI-compatible chat models through Xenodia.
- Discover available text, image, and media models before hardcoding model IDs.
- Run synchronous or asynchronous image generation.
- Create async Veo3.1 or Seedance 2.0 video tasks.
- Poll task results through a normalized Xenodia task resource.
- Bind owner and agent wallets for controlled billing.
- Install the Xenodia CLI and skill instructions for supported agent runtimes.
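As a minimal sketch of the first item, a chat call can be assembled as an OpenAI-compatible request against the gateway. The base URL and paths come from these docs; the API key and model ID below are placeholders (real model IDs come from `GET /v1/models`):

```python
import json

# Hypothetical key; obtain a real Xenodia API key from the console.
XENODIA_API_KEY = "xnd-placeholder-key"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-compatible chat completions request for the
    Xenodia gateway. Only builds the request; sending it (e.g. with an
    HTTP client of your choice) is out of scope for this sketch."""
    return {
        "url": "https://api.xenodia.xyz/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {XENODIA_API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,  # hypothetical ID; discover real ones via /v1/models
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("example-model", "Hello")
print(req["url"])
```

Because the payload shape follows the OpenAI chat schema, existing OpenAI client libraries can usually be pointed at the same URL instead of hand-building requests.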
Who this is for
These docs are written for three groups:
- Product backends that need one OpenAI-compatible gateway for text and media models.
- Agent platforms such as OpenClaw that need API keys, wallet boundaries, and model discovery without copying provider-specific docs into every agent package.
- Skill and MCP builders who need a stable action layer for routing, billing, and marketplace-style capability installation.
Core concepts
| Concept | Meaning |
|---|---|
| Model Aggregation API | OpenAI-compatible text calls plus shared image and video generation endpoints. |
| Model Discovery | GET /v1/models returns the enabled model IDs, modalities, public capability metadata, pricing hints, and available channels. |
| Unified key and payment | Runtime calls use a Xenodia API key; billing resolves through owner, agent, or controlled fallback wallet rules. |
| Task resource | Long-running image and video work returns task_id, state, poll_url, and normalized result / error fields. |
| Skill Marketplace layer | Skills should carry lightweight setup instructions and link back to docs for the full API surface. |
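The task resource above implies a standard polling pattern: keep fetching `poll_url` until the task reaches a terminal state, then read the normalized result or error. A sketch of that loop follows, with the transport injected as a callable so the example stays self-contained; the terminal state names (`succeeded`, `failed`) are assumptions, not confirmed values from the API:

```python
import time

def poll_task(fetch, poll_url, interval=2.0, max_attempts=30):
    """Poll a Xenodia task resource until it reaches a terminal state.
    `fetch` is any callable that GETs a URL and returns the decoded task
    JSON with the normalized fields (task_id, state, result, error)."""
    for _ in range(max_attempts):
        task = fetch(poll_url)
        if task["state"] in ("succeeded", "failed"):  # assumed terminal states
            return task
        time.sleep(interval)
    raise TimeoutError(f"task at {poll_url} did not finish")

# Stubbed fetch simulating two in-progress responses, then success.
_responses = iter([
    {"task_id": "t1", "state": "pending"},
    {"task_id": "t1", "state": "running"},
    {"task_id": "t1", "state": "succeeded", "result": {"url": "..."}},
])
done = poll_task(lambda url: next(_responses),
                 "https://api.xenodia.xyz/v1/tasks/t1", interval=0)
print(done["state"])  # → succeeded
```

Injecting `fetch` also makes the loop trivial to unit-test without network access, which is why the stub above works unchanged.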
Public surfaces
| Surface | Purpose |
|---|---|
| https://www.xenodia.xyz | Website, pricing, model pages, account and console entry. |
| https://docs.xenodia.xyz | Developer documentation and API reference. |
| https://api.xenodia.xyz | Runtime API endpoint. |
How to read the docs
Start with Quickstart if you want a working request. Read Authentication before wiring a server integration. Use Model Discovery before shipping model-specific behavior.
For OpenAI-compatible clients, the shortest path is usually:
- Replace the base URL with `https://api.xenodia.xyz`.
- Use a Xenodia API key as the Bearer token.
- Query `/v1/models` to pick a currently enabled model ID.
- Send `POST /v1/chat/completions` with a supported model ID.
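The steps above can be sketched with the Python standard library alone. This only constructs the two requests (discovery, then chat) so the shape is visible; in real code you would send them with `urllib.request.urlopen` or any HTTP client. The key and model ID are placeholders:

```python
import json
import urllib.request

BASE_URL = "https://api.xenodia.xyz"   # step 1: Xenodia base URL
API_KEY = "xnd-placeholder"            # step 2: hypothetical API key

def _request(method, path, payload=None):
    """Build an authenticated request against the gateway."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE_URL + path,
        data=data,
        method=method,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# Step 3: model discovery (send, then pick an enabled model ID from the response).
models_req = _request("GET", "/v1/models")

# Step 4: chat completion with the chosen model ID.
chat_req = _request("POST", "/v1/chat/completions", {
    "model": "example-model",  # hypothetical; use an ID returned by /v1/models
    "messages": [{"role": "user", "content": "Hello"}],
})
print(chat_req.get_method(), chat_req.full_url)
```

Running discovery before the chat call matters because model availability can change; hardcoding an ID skips exactly the check the discovery endpoint exists to provide.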
Source of truth
This frontend is independent, but it should not become a second backend. Endpoint schemas, model availability, pricing, and capability data should come from the existing Xenodia backend through OpenAPI and exported model catalog snapshots.