Xenodia Docs

Introduction

What Xenodia is, how the gateway works, and where to begin when integrating the public API.

Xenodia is a unified AI gateway for agentic systems. It combines model access, routing, media generation, wallet-aware billing, and agent setup into one public integration surface.

Use these docs when you need to:

  • Call OpenAI-compatible chat models through Xenodia.
  • Discover available text, image, and media models before hardcoding model IDs.
  • Run synchronous or asynchronous image generation.
  • Create async Veo3.1 or Seedance 2.0 video tasks.
  • Poll task results through a normalized Xenodia task resource.
  • Bind owner and agent wallets for controlled billing.
  • Install the Xenodia CLI and skill instructions for supported agent runtimes.

Who this is for

These docs are written for three groups:

  • Product backends that need one OpenAI-compatible gateway for text and media models.
  • Agent platforms such as OpenClaw that need API keys, wallet boundaries, and model discovery without copying provider-specific docs into every agent package.
  • Skill and MCP builders who need a stable action layer for routing, billing, and marketplace-style capability installation.

Core concepts

  • Model Aggregation API: OpenAI-compatible text calls plus shared image and video generation endpoints.
  • Model Discovery: GET /v1/models returns the enabled model IDs, modalities, public capability metadata, pricing hints, and available channels.
  • Unified key and payment: Runtime calls use a Xenodia API key; billing resolves through owner, agent, or controlled fallback wallet rules.
  • Task resource: Long-running image and video work returns task_id, state, poll_url, and normalized result / error fields.
  • Skill Marketplace layer: Skills should carry lightweight setup instructions and link back to these docs for the full API surface.
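As a sketch of how a client might consume the task resource, the helper below polls until a task reaches a terminal state. The task_id / state / poll_url field names come from the description above; the concrete state values ("succeeded", "failed") and the injected fetch callable are assumptions for illustration, not the confirmed Xenodia schema:

```python
import time
from typing import Callable

# Assumed terminal state names; check the task resource reference for the real set.
TERMINAL_STATES = {"succeeded", "failed"}

def poll_task(fetch: Callable[[str], dict], poll_url: str,
              interval: float = 2.0, max_attempts: int = 30) -> dict:
    """Poll a Xenodia task resource until it reaches a terminal state.

    `fetch` performs one authenticated GET against `poll_url` and returns the
    decoded task JSON (task_id, state, and normalized result / error fields).
    """
    for _ in range(max_attempts):
        task = fetch(poll_url)
        if task.get("state") in TERMINAL_STATES:
            return task
        time.sleep(interval)
    raise TimeoutError(f"task did not finish after {max_attempts} polls")
```

Injecting the fetch callable keeps the polling loop independent of any particular HTTP client and makes it trivial to exercise against canned responses.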

Public surfaces

  • https://www.xenodia.xyz: Website, pricing, model pages, account and console entry.
  • https://docs.xenodia.xyz: Developer documentation and API reference.
  • https://api.xenodia.xyz: Runtime API endpoint.

How to read the docs

Start with Quickstart if you want a working request. Read Authentication before wiring a server integration. Use Model Discovery before shipping model-specific behavior.
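A minimal model-discovery sketch, using only the endpoint and Bearer auth described here: the wrapping of entries in a "data" array follows the OpenAI-compatible list shape and is an assumption about the Xenodia payload, not a confirmed schema:

```python
import json
import urllib.request

XENODIA_BASE = "https://api.xenodia.xyz"  # runtime API endpoint

def extract_model_ids(payload: dict) -> list[str]:
    """Pull model IDs out of an OpenAI-compatible model list payload."""
    return [entry["id"] for entry in payload.get("data", [])]

def list_model_ids(api_key: str) -> list[str]:
    """GET /v1/models with the Xenodia API key as the Bearer token."""
    req = urllib.request.Request(
        f"{XENODIA_BASE}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return extract_model_ids(json.load(resp))
```

Calling list_model_ids at startup, rather than hardcoding model IDs, keeps an integration working as models are enabled or retired.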

For OpenAI-compatible clients, the shortest path is usually:

  1. Replace the base URL with https://api.xenodia.xyz.
  2. Use a Xenodia API key as the Bearer token.
  3. Query /v1/models to pick a currently enabled model ID.
  4. Send POST /v1/chat/completions with a supported model ID.
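The four steps above can be sketched with the standard library alone. The base URL, Bearer auth, and endpoint path come from this page; the request body and the choices[0].message.content response path follow the usual OpenAI-compatible chat shape and are assumptions about the Xenodia response:

```python
import json
import urllib.request

XENODIA_BASE = "https://api.xenodia.xyz"  # step 1: Xenodia as the base URL

def build_chat_request(api_key: str, model: str,
                       messages: list[dict]) -> urllib.request.Request:
    """Assemble a POST /v1/chat/completions request in OpenAI-compatible shape."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{XENODIA_BASE}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # step 2: Xenodia key as Bearer token
            "Content-Type": "application/json",
        },
        method="POST",
    )

def chat(api_key: str, model: str, prompt: str) -> str:
    """Send one user message and return the model's reply text."""
    # `model` should be an ID picked from /v1/models (steps 3 and 4).
    req = build_chat_request(api_key, model, [{"role": "user", "content": prompt}])
    with urllib.request.urlopen(req, timeout=30) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

Existing OpenAI client libraries should also work unchanged once their base URL and API key are pointed at Xenodia.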

Source of truth

This frontend is independent, but it should not become a second backend. Endpoint schemas, model availability, pricing, and capability data should come from the existing Xenodia backend through OpenAPI and exported model catalog snapshots.
