ApiSet.ai Gateway · Quickstart

Overview

ApiSet.ai Gateway exposes an OpenAI‑compatible HTTP API so that you can access multiple model providers (DeepSeek, OpenAI, etc.) through a single entrypoint.

  • Base URL: https://apiset.ai/api/v1
  • Chat Completions: POST /chat/completions
  • Auth: Authorization: Bearer {api_set_key}
  • Protocol: Largely compatible with OpenAI chat/completions

Getting an API Key

  • Sign up and sign in on the ApiSet.ai console.
  • Open API Key Management and create a new key.
  • Copy the generated sk-... string.
  • Use it in requests via the Authorization: Bearer {api_set_key} header.

Security tip: Never hardcode your API key in frontend code or public repos. Store it in backend configuration or secure environment variables.
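For example, in Python the key can be read from an environment variable instead of being hardcoded. A minimal sketch; the variable name APISET_API_KEY is illustrative, not a convention the gateway requires:

```python
import os

def auth_headers(env_var: str = "APISET_API_KEY") -> dict:
    """Build request headers from a key stored in the environment.

    The env var name is an example; use whatever your deployment defines.
    """
    api_key = os.environ.get(env_var)
    if not api_key:
        raise RuntimeError(f"{env_var} is not set")
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```

The same pattern applies in any language: keep the key out of source control and inject it at runtime.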

Non‑streaming request example

curl https://apiset.ai/api/v1/chat/completions \
  -H "Authorization: Bearer sk-xxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek:deepseek-chat",
    "messages": [
      {"role": "user", "content": "Hi, just reply with \"Hi\""}
    ],
    "temperature": 0.7,
    "max_tokens": 128,
    "stream": false
  }'

Sample response

{
  "id": "1a9f9f27-ef53-4b0e-8849-465bf8270314",
  "object": "chat.completion",
  "created": 1772183892,
  "model": "deepseek:deepseek-chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hi"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 8,
    "completion_tokens": 1,
    "total_tokens": 9
  }
}

Note: In requests, the model field uses the provider:modelId format (for example deepseek:deepseek-chat). The gateway parses this and routes the call to the correct upstream model.
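The routing rule above can be sketched client-side as a simple split on the first colon (a hypothetical helper, not part of any SDK):

```python
def split_model(model: str) -> tuple[str, str]:
    """Split a 'provider:modelId' string into its two parts.

    Mirrors the gateway's routing rule described above; splits only on the
    first colon so model IDs containing ':' remain intact.
    """
    provider, sep, model_id = model.partition(":")
    if not sep or not provider or not model_id:
        raise ValueError(f"expected 'provider:modelId', got {model!r}")
    return provider, model_id
```

For example, split_model("deepseek:deepseek-chat") yields ("deepseek", "deepseek-chat").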

Streaming request example

curl https://apiset.ai/api/v1/chat/completions \
  -H "Authorization: Bearer sk-xxxx" \
  -H "Content-Type: application/json" \
  -N \
  -d '{
    "model": "deepseek:deepseek-chat",
    "messages": [
      {"role": "user", "content": "Introduce yourself in one sentence"}
    ],
    "temperature": 0.7,
    "max_tokens": 256,
    "stream": true
  }'

Typical SSE chunks:

data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"content":"Hi"}}], ...}

data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"content":", I'm ApiSet.ai Gateway."}}], ...}

data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"content":""},"finish_reason":"stop"}],"usage":{"prompt_tokens":13,"completion_tokens":10,"total_tokens":23}}

The usage field in the final chunk is used for billing and usage statistics.
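Client-side, the chunks can be reassembled by stripping the "data: " prefix and concatenating each delta.content. A minimal parsing sketch; in practice you would feed it lines from resp.iter_lines(decode_unicode=True) on a streaming requests call. The "[DONE]" sentinel check is an assumption carried over from the OpenAI protocol, which the gateway largely follows:

```python
import json

def collect_deltas(sse_lines) -> str:
    """Concatenate delta.content from OpenAI-style 'data:' SSE lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data: "):]
        if payload == "[DONE]":  # OpenAI-style end sentinel, if sent
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            # The final chunk may carry an empty delta plus finish_reason/usage.
            parts.append(choice.get("delta", {}).get("content") or "")
    return "".join(parts)
```

Tolerating empty deltas matters because, as shown above, the final chunk carries finish_reason and usage rather than text.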

Language SDK examples

JavaScript (Node.js, fetch)

const resp = await fetch("https://apiset.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": "Bearer sk-xxxx",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "deepseek:deepseek-chat",
    messages: [{ role: "user", content: "Hi" }],
    stream: false
  })
});

const data = await resp.json();
console.log(data.choices[0].message.content);

Python (requests)

import requests

url = "https://apiset.ai/api/v1/chat/completions"
headers = {
    "Authorization": "Bearer sk-xxxx",
    "Content-Type": "application/json",
}
json_data = {
    "model": "deepseek:deepseek-chat",
    "messages": [{"role": "user", "content": "Hi"}],
    "stream": False,
}

resp = requests.post(url, headers=headers, json=json_data, timeout=30)
data = resp.json()
print(data["choices"][0]["message"]["content"])

Error format

The gateway uses an OpenAI‑style error envelope, for example:

{
  "error": {
    "message": "Incorrect API key provided.",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_api_key"
  },
  "request_id": "9f6b3bb8-9328-91ad-a5b7-1feed430211b"
}

Common error codes:

  • invalid_api_key: API key does not exist or is invalid.
  • model_not_found: The requested model is not configured in the gateway.
  • insufficient_quota: Account balance or quota is insufficient.
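Because errors arrive in the envelope shown above, a client can detect them with a small helper. A sketch against that envelope; describe_error is a hypothetical name, not a gateway or SDK function:

```python
from typing import Optional

def describe_error(body: dict) -> Optional[str]:
    """Return a readable summary if the response body is an error envelope.

    Returns None for successful responses. Field names follow the
    error-format example above (error.code, error.message, request_id).
    """
    err = body.get("error")
    if err is None:
        return None
    return (
        f"{err.get('code')}: {err.get('message')} "
        f"(request_id={body.get('request_id')})"
    )
```

Including request_id in logs is useful when reporting issues, since it identifies the exact failed call.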