Chat Completions API

Use Xenodia's OpenAI-compatible endpoint, POST /v1/chat/completions, for server-side text generation and agent workflows.

Endpoint

POST https://api.xenodia.xyz/v1/chat/completions

Authentication

Authorization: Bearer YOUR_LONG_TERM_KEY

Minimal request

{
  "model": "openai/gpt-4o-mini",
  "messages": [
    {
      "role": "user",
      "content": "Reply with OK only."
    }
  ]
}

cURL

curl -X POST "https://api.xenodia.xyz/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $XENODIA_API_KEY" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [
      { "role": "system", "content": "You are a precise test assistant." },
      { "role": "user", "content": "Reply with OK only." }
    ],
    "temperature": 0
  }'
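The same request can be made from Python using only the standard library. This is a minimal sketch, not an official client: the helper names (build_request, chat) are illustrative, and it assumes the key is in the XENODIA_API_KEY environment variable, matching the cURL example above.

```python
import json
import os
import urllib.request

API_URL = "https://api.xenodia.xyz/v1/chat/completions"

def build_request(messages, model="openai/gpt-4o-mini", **params):
    """Assemble the HTTP request: JSON body plus the bearer-auth header."""
    body = json.dumps({"model": model, "messages": messages, **params}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['XENODIA_API_KEY']}",
        },
        method="POST",
    )

def chat(messages, **params):
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_request(messages, **params)) as resp:
        return json.load(resp)
```

For example, `chat([{"role": "user", "content": "Reply with OK only."}], temperature=0)` mirrors the cURL call above.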

Success response

{
  "id": "chatcmpl-xxx",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "OK"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 18,
    "completion_tokens": 1,
    "total_tokens": 19
  }
}
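A response handler should read only the normalized fields shown above. The following sketch (the function name is illustrative) extracts the assistant text and usage without touching optional OpenAI fields:

```python
def extract_completion(resp: dict) -> tuple[str, dict]:
    """Return the assistant text and token usage from a gateway response.

    Relies only on the normalized fields the gateway persists
    (choices[].message and usage); optional OpenAI fields such as
    finish_reason or object are deliberately not read.
    """
    message = resp["choices"][0]["message"]
    return message["content"], resp.get("usage", {})
```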

Compatibility

The request shape intentionally follows OpenAI Chat Completions. Most server-side clients can migrate by changing the base URL and API key, then selecting a Xenodia model ID from Model Discovery.

The current gateway response exposes the normalized fields it persists from the upstream response: id, choices[].message, and usage. Do not require optional OpenAI fields such as object, created, choices[].index, or choices[].finish_reason unless your own adapter adds them.

Production guidance

  • Query /v1/models before hardcoding model IDs.
  • Treat provider-specific parameters as optional unless the model capability data exposes them.
  • Keep retries conservative for non-idempotent workflows.
  • Log request IDs and status codes, not raw prompts or API keys.
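The retry guidance above can be sketched as a small helper. Here `send` stands in for whatever callable issues the HTTP request, and the transient status-code set is a common convention, not documented gateway behavior; for non-idempotent workflows you may want to retry on 429 only.

```python
import time

# Status codes commonly treated as transient. Confirm against the
# gateway's actual error semantics before relying on this list.
TRANSIENT = {429, 500, 502, 503, 504}

def send_with_retries(send, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call send() -> (status, body) with conservative retries.

    Retries only on transient status codes with exponential backoff,
    so other failures surface immediately instead of being replayed.
    """
    for attempt in range(attempts):
        status, body = send()
        if status not in TRANSIENT or attempt == attempts - 1:
            return status, body
        sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```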
