model, requests route to the project’s default model, or to a model in an active A/B test.
Every interaction (prompt + completion pair) is logged automatically. See Interactions for details.
Chat completions
- SDK
- OpenAI client
- HTTP
| Parameter | Type | Description |
|---|---|---|
| model | str | Model key. Omit to use the project default. |
| messages | list | Chat messages with role and content. |
| labels | dict | Key-value pairs for filtering interactions. |
| stream | bool | Enable streaming (default: False). |
| temperature | float | Sampling temperature. |
| max_tokens | int | Maximum number of tokens to generate. |
| stop | list | Stop sequences. |
| top_p | float | Top-p sampling threshold. |
| session_id | str or UUID | Session ID for KV-cache reuse across turns. |
| store | bool | Whether to log the interaction (default: True). |
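The parameters above can be sketched as a request body. This is an illustrative helper, not the SDK itself: the function name and the decision to send the body as JSON are assumptions; only the parameter names and defaults come from the table.

```python
import json

def build_chat_request(
    messages,
    model=None,
    labels=None,
    stream=False,
    temperature=None,
    max_tokens=None,
    session_id=None,
    store=True,
):
    """Assemble a chat-completions request body from the documented parameters.

    Omitting model routes the request to the project default (or an
    active A/B test), so it is only included when explicitly set.
    """
    body = {"messages": messages, "stream": stream, "store": store}
    if model is not None:
        body["model"] = model
    if labels is not None:
        body["labels"] = labels
    if temperature is not None:
        body["temperature"] = temperature
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    if session_id is not None:
        body["session_id"] = str(session_id)  # UUIDs serialize as strings
    return body

payload = build_chat_request(
    messages=[{"role": "user", "content": "Hello!"}],
    labels={"team": "search"},
    temperature=0.7,
)
print(json.dumps(payload, indent=2))
```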
Streaming
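With stream set to True, the response arrives as incremental chunks rather than one object. As a hedged sketch (OpenAI-style delta chunks are assumed; the exact chunk shape is not confirmed by this page), the client concatenates the partial strings into the final message:

```python
def collect_stream(chunks):
    """Concatenate streaming chunks into the full completion text.

    Each chunk is assumed to carry a partial string under
    choices[0]["delta"]["content"]; role-only and final chunks omit it.
    """
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0].get("delta", {})
        content = delta.get("content")
        if content:
            parts.append(content)
    return "".join(parts)

# Simulated chunks, as a server might emit them when stream=True.
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo!"}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
print(collect_stream(chunks))  # → Hello!
```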
Completion ID
Use completion_id to log Metrics against the response.
Multimodal chat completions
Models with the Multimodal tag accept images alongside text. In the UI, attach images directly in the Chat view. For multimodal requests, content is a list of content fragments instead of a plain string. Each fragment has a type field:
- text: a text fragment: {"type": "text", "text": "..."}
- image_url: an image fragment: {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}
Images must be supplied as data: URIs (PNG, JPEG, GIF, or WebP, up to 10MB); HTTP URLs are not supported. You can combine multiple text and image fragments in a single message.
- SDK
- OpenAI client
- HTTP
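A multimodal message can be sketched from the fragment shapes above. The helper name and the placeholder bytes are illustrative; the fragment structure and the data: URI requirement come from this page.

```python
import base64

def image_fragment(image_bytes, mime="image/png"):
    """Wrap raw image bytes in an image_url fragment with a data: URI."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}

# content is a list of fragments instead of a plain string.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        image_fragment(b"\x89PNG\r\n\x1a\n"),  # placeholder bytes, not a real image
    ],
}
```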
Image format difference
The Adaptive SDK also accepts a flat string for image_url:
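The original example is not captured here; as a sketch of the difference, both forms below carry the same data: URI, but the Adaptive SDK accepts the shorter flat-string spelling:

```python
uri = "data:image/png;base64,iVBORw0KGgo="  # illustrative, truncated base64

# OpenAI-style nested object:
nested = {"type": "image_url", "image_url": {"url": uri}}

# Adaptive SDK flat-string form (same fragment, shorter spelling):
flat = {"type": "image_url", "image_url": uri}
```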
