Send chat completions to deployed models using the Adaptive SDK, OpenAI Python library, or any HTTP client. You can also chat with models directly in the UI: open your project and click Chat. If you omit model, requests route to the project’s default model, or to a model in an active A/B test. Every interaction (prompt + completion pair) is logged automatically. See Interactions for details.

Chat completions

response = adaptive.chat.create(
    model="llama-3.1-8b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    labels={"project": "support-bot"},
)
print(response.choices[0].message.content)
| Parameter | Type | Description |
| --- | --- | --- |
| model | str | Model key. Omit to use the project default. |
| messages | list | Chat messages with role and content. |
| labels | dict | Key-value pairs for filtering interactions. |
| stream | bool | Enable streaming (default: False). |
| temperature | float | Sampling temperature. |
| max_tokens | int | Maximum tokens to generate. |
| stop | list | Stop sequences. |
| top_p | float | Top-p sampling threshold. |
| session_id | str or UUID | Session ID for KV-cache reuse across turns. |
| store | bool | Whether to log the interaction (default: True). |
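
To illustrate how session_id ties turns together, the sketch below builds the keyword arguments for two consecutive adaptive.chat.create calls that share one session ID. The messages and the assistant reply are placeholders; only the parameter names come from the table above.

```python
import uuid

# One session ID shared across turns enables KV-cache reuse.
session_id = str(uuid.uuid4())

history = [{"role": "user", "content": "Hello!"}]

# First turn: pass the session ID alongside the messages.
first_turn = dict(
    messages=list(history),
    session_id=session_id,
)

# Append the (placeholder) assistant reply and the next user message,
# then reuse the same session_id on the follow-up call.
history.append({"role": "assistant", "content": "Hi! How can I help?"})
history.append({"role": "user", "content": "Tell me a joke."})
second_turn = dict(
    messages=list(history),
    session_id=session_id,
)
```

Each dict would be passed as keyword arguments to adaptive.chat.create(...); reusing the same session_id lets the server recognize the turns as one conversation.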

Streaming

stream = adaptive.chat.create(
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
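
If you need the full completion rather than incremental output, you can accumulate the deltas as they arrive. The helper below sketches this; the stand-in chunk objects only mirror the choices[0].delta.content shape used above and are not the SDK's actual chunk type.

```python
from types import SimpleNamespace

def collect_stream(stream):
    """Accumulate streamed delta fragments into the full completion text."""
    parts = []
    for chunk in stream:
        # Skip keep-alive chunks with no choices and final chunks
        # whose delta carries no content.
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)

# Stand-in chunks mirroring the shape used in the loop above.
def fake_chunk(text):
    return SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=text))]
    )

chunks = [fake_chunk("Hel"), fake_chunk("lo!"), fake_chunk(None)]
print(collect_stream(chunks))  # → Hello!
```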

Completion ID

Use completion_id to log Metrics against the response:
completion_id = response.choices[0].completion_id
See SDK Reference for all chat methods.

Multimodal chat completions

Models with the Multimodal tag accept images alongside text. In the UI, attach images directly in the Chat view. For multimodal requests, content is a list of content fragments instead of a plain string. Each fragment has a type field:
  • text: a text fragment: {"type": "text", "text": "..."}
  • image_url: an image fragment: {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}
Images must be base64-encoded data: URIs (PNG, JPEG, GIF, or WebP, up to 10MB). HTTP URLs are not supported. You can combine multiple text and image fragments in a single message.
import base64

with open("photo.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode()

response = adaptive.chat.create(
    model="your-vlm-key",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_data}"}},
            ],
        }
    ],
)
The Adaptive SDK also accepts a flat string for image_url:
# Nested object (OpenAI-compatible)
{"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}

# Flat string (Adaptive shorthand)
{"type": "image_url", "image_url": "data:image/png;base64,..."}
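
A small helper can take care of encoding local files into data: URIs. This is a sketch, not part of the SDK; it enforces the formats and 10MB limit stated above via the standard-library mimetypes module.

```python
import base64
import mimetypes

MAX_IMAGE_BYTES = 10 * 1024 * 1024  # 10MB limit noted above
SUPPORTED_TYPES = {"image/png", "image/jpeg", "image/gif", "image/webp"}

def image_to_data_uri(path):
    """Read a local image file and return it as a base64 data: URI."""
    mime, _ = mimetypes.guess_type(path)
    if mime not in SUPPORTED_TYPES:
        raise ValueError(f"unsupported image type: {mime}")
    with open(path, "rb") as f:
        data = f.read()
    if len(data) > MAX_IMAGE_BYTES:
        raise ValueError("image exceeds the 10MB limit")
    return f"data:{mime};base64,{base64.b64encode(data).decode()}"
```

The result can be dropped into either fragment form, e.g. the flat shorthand: {"type": "image_url", "image_url": image_to_data_uri("photo.png")}.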