API Reference

Jan's local API server exposes an OpenAI-compatible REST API at http://127.0.0.1:1337. Use it as a drop-in replacement for cloud APIs in any application that supports OpenAI-compatible endpoints.

Supported Endpoints

GET /v1/models

Returns a list of all models currently loaded or available in Jan.


curl http://127.0.0.1:1337/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"


{
  "object": "list",
  "data": [
    { "id": "jan-v3-4b-base-instruct", "object": "model" }
  ]
}
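The same lookup can be done from Python with only the standard library. A minimal sketch (function names here are illustrative, not part of Jan's API) that queries this endpoint and pulls out the model ids:

```python
import json
import urllib.request

def extract_ids(payload: dict):
    """Pull the id field from each entry in an OpenAI-style model list."""
    return [m["id"] for m in payload.get("data", [])]

def list_model_ids(base_url: str = "http://127.0.0.1:1337",
                   api_key: str = "YOUR_API_KEY"):
    """Fetch /v1/models from the local server and return the model ids."""
    req = urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_ids(json.load(resp))

if __name__ == "__main__":
    print(list_model_ids())
```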


POST /v1/chat/completions

OpenAI-compatible chat completions endpoint. Supports streaming, tool calling, and multi-turn conversations.


curl http://127.0.0.1:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "jan-v3-4b-base-instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'

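With "stream": true, the server returns server-sent events, each `data:` line carrying a JSON chunk whose `delta` holds the next piece of content, terminated by `data: [DONE]`. A minimal sketch of consuming such a stream with the standard library (helper names are illustrative):

```python
import json
import urllib.request

JAN_URL = "http://127.0.0.1:1337/v1/chat/completions"

def parse_sse_line(line: str):
    """Parse one SSE line into a JSON chunk; return None for
    non-data lines and the [DONE] terminator."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":  # OpenAI-style end-of-stream marker
        return None
    return json.loads(payload)

def stream_chat(prompt: str, model: str = "jan-v3-4b-base-instruct"):
    """Send a streaming chat request and yield content deltas."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }).encode()
    req = urllib.request.Request(
        JAN_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",
        },
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # HTTPResponse iterates line by line
            chunk = parse_sse_line(raw.decode("utf-8"))
            if chunk:
                delta = chunk["choices"][0]["delta"]
                if "content" in delta:
                    yield delta["content"]

if __name__ == "__main__":
    for piece in stream_chat("Hello!"):
        print(piece, end="", flush=True)
```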

POST /v1/messages

Anthropic-compatible messages endpoint. Jan automatically translates requests to the internal format, so you can use Anthropic SDK clients pointed at your local server.


curl http://127.0.0.1:1337/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "model": "jan-v3-4b-base-instruct",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
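Compared with the chat completions endpoint, note the two schema differences shown above: authentication uses the x-api-key header rather than a Bearer token, and max_tokens is a required top-level field. A minimal sketch that assembles such a request in Python (the helper name is illustrative); the returned tuple can be handed to any HTTP client:

```python
import json

def build_messages_request(prompt: str,
                           model: str = "jan-v3-4b-base-instruct",
                           max_tokens: int = 1024):
    """Assemble an Anthropic-style /v1/messages request as (url, headers, body)."""
    url = "http://127.0.0.1:1337/v1/messages"
    headers = {
        "Content-Type": "application/json",
        # Anthropic clients authenticate with x-api-key, not Bearer auth.
        "x-api-key": "YOUR_API_KEY",
    }
    body = json.dumps({
        "model": model,
        "max_tokens": max_tokens,  # required by the Anthropic schema
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body
```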


OpenAI Responses API (/v1/responses) support is coming soon.

Use With Agents & Integrations

Jan's local API is designed to wire directly into AI agents and coding tools — no cloud account needed.

Using with OpenAI SDK

Point the OpenAI SDK at your local server by setting the base URL and API key:


from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:1337/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="jan-v3-4b-base-instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)


import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://127.0.0.1:1337/v1",
  apiKey: "YOUR_API_KEY",
});