Client

Adaptive (Sync)

Adaptive(base_url: str, api_key: str | None = None, default_headers: Dict[str, str] | None = None, timeout_secs: float | None = 90.0) -> None
Instantiate a new synchronous Adaptive client bound to a project.
parameters
  • base_url: The base URL for the Adaptive API.
  • api_key: API key for authentication. Defaults to None, in which case the environment variable ADAPTIVE_API_KEY must be set.
  • default_headers: Optional HTTP headers to include with every request.
  • timeout_secs: Timeout in seconds for HTTP requests. Defaults to 90.0 seconds. Set to None for no timeout.

AsyncAdaptive (Async)

AsyncAdaptive(base_url: str, api_key: str | None = None, default_headers: Dict[str, str] | None = None, timeout_secs: float | None = 90.0) -> None
Instantiate a new asynchronous Adaptive client bound to a project.
parameters
  • base_url: The base URL for the Adaptive API.
  • api_key: API key for authentication. Defaults to None, in which case the environment variable ADAPTIVE_API_KEY must be set.
  • default_headers: Optional HTTP headers to include with every request.
  • timeout_secs: Timeout in seconds for HTTP requests. Defaults to 90.0 seconds. Set to None for no timeout.

Resources

AB tests

Resource to interact with AB Tests. Access via adaptive.ab_tests

cancel

cancel(key: str) -> str
Cancel an ongoing AB test.
parameters
  • key: The AB test key.

create

create(ab_test_key: str, feedback_key: str, models: List[str], traffic_split: float = 1.0, feedback_type: Literal['metric', 'preference'] = 'metric', auto_deploy: bool = False, project: str | None = None) -> AbCampaignCreateData
Creates a new A/B test in the client’s project.
parameters
  • ab_test_key: A unique key to identify the AB test.
  • feedback_key: The feedback key against which the AB test will run.
  • models: The models to include in the AB test; they must be attached to the project.
  • traffic_split: Fraction of production traffic to route to the AB test: traffic_split * 100% of the project's inference requests will be sent randomly to one of the models included in the AB test.
  • feedback_type: What type of feedback to run the AB test on, metric or preference.
  • auto_deploy: If set to True, when the AB test is completed, the winning model automatically gets promoted to the project default model.
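
The traffic_split semantics can be illustrated with a short simulation (a sketch of the documented behavior, not the server's routing code; route_request and the "default" sentinel are hypothetical):

```python
import random

def route_request(models, traffic_split, rng):
    # With probability traffic_split, send the request to a random AB-test
    # model; otherwise fall back to the project's default model.
    if rng.random() < traffic_split:
        return rng.choice(models)
    return "default"

rng = random.Random(0)
routed = [route_request(["model_a", "model_b"], 0.2, rng) for _ in range(10_000)]
# roughly traffic_split * 100% of requests hit an AB-test model
ab_fraction = 1 - routed.count("default") / len(routed)
```

With traffic_split=0.2, about 20% of requests participate in the AB test; at 1.0 (the default), every request does.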

get

get(key: str) -> DescribeAbCampaignAbCampaign | None
Get the details of an AB test.
parameters
  • key: The AB test key.

list

list(active: bool | None = None, status: Literal['warmup', 'in_progress', 'done', 'cancelled'] | None = None, project: str | None = None) -> Sequence[AbCampaignDetailData]
List the project AB tests.
parameters
  • active: Filter on active or inactive AB tests.
  • status: Filter on one of the possible AB test statuses.
  • project: Project key. Falls back to client’s default if not provided.

Artifacts

Resource to interact with job artifacts. Access via adaptive.artifacts

download

download(artifact_id: str, destination_path: str) -> None
Download an artifact file to a local path.
parameters
  • artifact_id: The UUID of the artifact to download.
  • destination_path: Local file path where the artifact will be saved.

Chat

Access via adaptive.chat

create

create(messages: List[ChatMessage], stream: bool | None = None, model: str | None = None, stop: List[str] | None = None, max_tokens: int | None = None, temperature: float | None = None, top_p: float | None = None, stream_include_usage: bool | None = None, session_id: str | UUID | None = None, project: str | None = None, user: str | UUID | None = None, ab_campaign: str | None = None, n: int | None = None, labels: Dict[str, str] | None = None, store: bool | None = None) -> ChatResponse | Generator[ChatResponseChunk, None, None]
Create a chat completion.
parameters
  • messages: Input messages, each dict with keys role and content.
  • stream: If True, partial message deltas will be returned. When the stream ends, chunk.choices will be None.
  • model: Target model key for inference. If None, the request will be routed to the project’s default model.
  • stop: Sequences where the API will stop generating further tokens.
  • max_tokens: Maximum number of tokens to generate.
  • temperature: Sampling temperature.
  • top_p: Threshold for top-p sampling.
  • stream_include_usage: If set, an additional chunk will be streamed with the token usage statistics for the entire request.
  • session_id: Session ID to group related interactions.
  • project: Project key. Falls back to client’s default if not provided.
  • user: ID of user making request. If not None, will be logged as metadata for the request.
  • ab_campaign: AB test key. If set, request will be guaranteed to count towards AB test results, no matter the configured traffic_split.
  • n: Number of chat completions to generate for the input messages.
  • labels: Key-value pairs of interaction labels.
  • store: Whether to store the interaction for future reference. Stores by default.
Examples:
# streaming chat request
stream_response = client.chat.create(
    model="model_key", messages=[{"role": "user", "content": "Hello from SDK"}], stream=True
)

print("Streaming response: ", end="", flush=True)
for chunk in stream_response:
    if chunk.choices:
        content = chunk.choices[0].delta.content
        print(content, end="", flush=True)

Compute pools

Resource to interact with compute pools. Access via adaptive.compute_pools

list

list()
List all compute pools available in the system. Returns: A list of compute pool objects.

resize_inference_partition

resize_inference_partition(compute_pool_key: str, size: int) -> list[ResizeResult]
Resize the inference partitions of all harmony groups in a compute pool.

Recipes

Resource to interact with custom scripts. Access via adaptive.recipes

delete

delete(recipe_key: str, project: str | None = None) -> bool
Delete a recipe.
parameters
  • recipe_key: The key or ID of the recipe to delete.
  • project: Optional project key. Falls back to client’s default.
Returns: True if deletion was successful.

generate_sample_input

generate_sample_input(recipe_key: str, project: str | None = None) -> dict
Generate a sample input dictionary based on the recipe’s JSON schema.
parameters
  • recipe_key: The key or ID of the recipe.
  • project: Optional project key. Falls back to client’s default.
Returns: A sample input dictionary conforming to the recipe’s schema.

get

get(recipe_key: str, project: str | None = None) -> CustomRecipeData | None
Get details for a specific recipe.
parameters
  • recipe_key: The key or ID of the recipe.
  • project: Optional project key. Falls back to client’s default.
Returns: The recipe data if found, otherwise None.

list

list(project: str | None = None) -> Sequence[CustomRecipeData]
List all custom recipes for a project.
parameters
  • project: Optional project key. Falls back to client’s default.
Returns: A sequence of custom recipe data objects.

update

update(recipe_key: str, path: str | None = None, entrypoint: str | None = None, entrypoint_config: str | None = None, name: str | None = None, description: str | None = None, labels: Sequence[tuple[str, str]] | None = None, project: str | None = None) -> CustomRecipeData
Update an existing recipe.
parameters
  • recipe_key: The key of the recipe to update.
  • path: Optional new path to a Python file or directory to replace the recipe code. If None, only metadata (name, description, labels) is updated.
  • entrypoint: Optional path to the recipe entrypoint file, relative to the path directory. Only applicable when path is a directory. Raises ValueError if path is a single file (entrypoint is not supported for single files) or if path is a directory that already contains main.py. Raises FileNotFoundError if the specified entrypoint file doesn’t exist in the directory. If path is a directory and entrypoint is None, the directory must contain a main.py file, or FileNotFoundError is raised.
  • entrypoint_config: Optional path to a separate config file that specifies the InputConfig for the recipe entrypoint. Only applicable when path is a directory. Raises ValueError if path is a single file (entrypoint_config is not supported for single files) or if path is a directory that already contains config.py. Raises FileNotFoundError if the specified entrypoint_config file doesn’t exist in the directory. If path is a directory and entrypoint_config is None: if entrypoint is specified, the InputConfig should be included in it; if entrypoint is not specified, main.py should contain the InputConfig, or a config.py file must be present.
  • name: Optional new display name.
  • description: Optional new description.
  • labels: Optional new key-value labels as tuples of (key, value).
  • project: Optional project key. Falls back to client’s default.
Returns: The updated recipe data.

upload

upload(path: str, recipe_key: str | None = None, entrypoint: str | None = None, entrypoint_config: str | None = None, name: str | None = None, description: str | None = None, labels: dict[str, str] | None = None, project: str | None = None) -> CustomRecipeData
Upload a recipe from either a single Python file or a directory (path).
parameters
  • path: Path to a Python file or directory containing the recipe.
  • recipe_key: Optional unique key for the recipe. If not provided, it is inferred from: the file name (without .py) if path is a file; “dir_name/entrypoint_name” if path is a directory and a custom entrypoint is specified; the directory name if path is a directory and no custom entrypoint is specified.
  • entrypoint: Optional path to the recipe entrypoint file, relative to the path directory. Only applicable when path is a directory. Raises ValueError if path is a single file (entrypoint is not supported for single files) or if path is a directory that already contains main.py. Raises FileNotFoundError if the specified entrypoint file doesn’t exist in the directory. If path is a directory and entrypoint is None, the directory must contain a main.py file, or FileNotFoundError is raised.
  • entrypoint_config: Optional path to a separate config file that specifies the InputConfig for the recipe entrypoint. Only applicable when path is a directory. Raises ValueError if path is a single file (entrypoint_config is not supported for single files) or if path is a directory that already contains config.py. Raises FileNotFoundError if the specified entrypoint_config file doesn’t exist in the directory. If path is a directory and entrypoint_config is None: if entrypoint is specified, the InputConfig should be included in it; if entrypoint is not specified, main.py should contain the InputConfig, or a config.py file must be present.
  • name: Optional display name for the recipe.
  • description: Optional description.
  • labels: Optional key-value labels.
  • project: Optional project identifier.

upsert

upsert(path: str, recipe_key: str | None = None, entrypoint: str | None = None, entrypoint_config: str | None = None, name: str | None = None, description: str | None = None, labels: dict[str, str] | None = None, project: str | None = None) -> CustomRecipeData
Upload a recipe if it doesn’t exist, or update it if it does.
parameters
  • path: Path to a Python file or directory containing the recipe.
  • recipe_key: Optional unique key for the recipe. If not provided, it is inferred from: the file name (without .py) if path is a file; “dir_name/entrypoint_name” if path is a directory and a custom entrypoint is specified; the directory name if path is a directory and no custom entrypoint is specified.
  • entrypoint: Optional path to the recipe entrypoint file, relative to the path directory. Only applicable when path is a directory. Raises ValueError if path is a single file (entrypoint is not supported for single files) or if path is a directory that already contains main.py. Raises FileNotFoundError if the specified entrypoint file doesn’t exist in the directory. If path is a directory and entrypoint is None, the directory must contain a main.py file, or FileNotFoundError is raised.
  • entrypoint_config: Optional path to a separate config file that specifies the InputConfig for the recipe entrypoint. Only applicable when path is a directory. Raises ValueError if path is a single file (entrypoint_config is not supported for single files) or if path is a directory that already contains config.py. Raises FileNotFoundError if the specified entrypoint_config file doesn’t exist in the directory. If path is a directory and entrypoint_config is None: if entrypoint is specified, the InputConfig should be included in it; if entrypoint is not specified, main.py should contain the InputConfig, or a config.py file must be present.
  • name: Optional display name for the recipe.
  • description: Optional description.
  • labels: Optional key-value labels.
  • project: Optional project identifier. Falls back to client’s default if it is set.
Returns: The created or updated recipe data.
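
The recipe_key inference rules above can be sketched as follows (an illustration of the documented rules, not the SDK's internal code; infer_recipe_key is hypothetical, and a file is detected here by its .py suffix rather than a filesystem check):

```python
import os

def infer_recipe_key(path, entrypoint=None):
    # Mirrors the documented inference: file name without .py for files,
    # "dir_name/entrypoint_name" for a directory with a custom entrypoint,
    # otherwise the directory name.
    base = os.path.basename(os.path.normpath(path))
    if path.endswith(".py"):
        return base[:-3]
    if entrypoint is not None:
        entry = os.path.splitext(os.path.basename(entrypoint))[0]
        return f"{base}/{entry}"
    return base
```

For example, a file recipes/train.py yields key "train", while a directory recipes/train with entrypoint run.py yields "train/run".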

Datasets

Resource to interact with file datasets. Access via adaptive.datasets

delete

delete(key: str, project: str | None = None) -> bool
Delete a dataset.

get

get(key: str, project: str | None = None) -> DatasetData | None
Get details for a dataset.
parameters
  • key: Dataset key.

list

list(project: str | None = None) -> List[ListDatasetsDatasets]
List previously uploaded datasets.

upload

upload(file_path: str, dataset_key: str, name: str | None = None, project: str | None = None) -> DatasetData
Upload a dataset from a file. The file must be JSONL, where each line matches the supported structure.
parameters
  • file_path: Path to jsonl file.
  • dataset_key: New dataset key.
  • name: Optional name to render in UI; if None, defaults to same as dataset_key.
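
Preparing a JSONL file means writing one JSON object per line. A minimal sketch (the chat-style per-line shape below is an assumption for illustration; consult your deployment for the supported structure):

```python
import json
import os
import tempfile

# hypothetical per-line structure: input messages plus a completion
records = [
    {"messages": [{"role": "user", "content": "Hi"}], "completion": "Hello!"},
    {"messages": [{"role": "user", "content": "Bye"}], "completion": "Goodbye!"},
]

path = os.path.join(tempfile.mkdtemp(), "dataset.jsonl")
with open(path, "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")  # one JSON object per line

# round-trip check that every line parses as valid JSON
with open(path) as f:
    parsed = [json.loads(line) for line in f]
```

The resulting file path can then be passed as file_path to datasets.upload.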

Embeddings

Resource to interact with embeddings. Access via adaptive.embeddings

create

create(input: str, model: str | None = None, encoding_format: Literal['Float', 'Base64'] = 'Float', project: str | None = None, user: str | UUID | None = None) -> EmbeddingsResponseList
Create an embeddings inference request.
parameters
  • input: Input text to embed.
  • model: Target model key for inference. If None, the request will be routed to the project’s default model. The request will error if the default model is not an embedding model.
  • encoding_format: Encoding format of the response.
  • user: ID of user making the request. If not None, will be logged as metadata for the request.

Graders

Resource to interact with grader definitions used to evaluate model completions. Access via adaptive.graders

delete

delete(grader_key: str, project: str | None = None) -> bool
Delete a grader. Returns True on success.

get

get(grader_key: str, project: str | None = None) -> GraderData | None
Retrieve a specific grader by ID or key.

list

list(project: str | None = None) -> Sequence[GraderData]
List all graders for the given project.

lock

lock(grader_key: str, locked: bool, project: str | None = None) -> GraderData
Lock or unlock a grader.
parameters
  • grader_key: ID or key of the grader.
  • locked: Whether to lock (True) or unlock (False) the grader.
  • project: Explicit project key. Falls back to client.default_project.

test_external_endpoint

test_external_endpoint(url: str) -> TestRemoteEnvTestRemoteEnvRemoteEnvTestOnline | TestRemoteEnvTestRemoteEnvRemoteEnvTestOffline
Test an external endpoint to check that it is reachable from Adaptive and returns a valid response.

Integrations

Resource to manage integrations and notification subscriptions. Access via adaptive.integrations

create

create(team: str, name: str, provider: Literal['slack', 'smtp', 'webhook', 'github'], connection: ConnectionConfigInputSlack | ConnectionConfigInputSmtp | ConnectionConfigInputWebhook | ConnectionConfigInputGitHub, subscriptions: Optional[List[SubscriptionInput]] = None, delivery_policy: Literal['multishot', 'singleshot'] | None = None) -> CreateIntegrationCreateIntegration
Create a new integration.
parameters
  • team: Team ID or key.
  • name: Human-readable name for the integration.
  • provider: Provider name.
  • connection: Connection config. Use one of:
  • ConnectionConfigInputSlack(webhook_url=..., bot_token=...)
  • ConnectionConfigInputSmtp(host=..., port=..., username=..., password=..., from_email=..., to_emails=[...])
  • ConnectionConfigInputWebhook(url=..., method=..., headers=...)
  • ConnectionConfigInputGitHub(api_token=..., org=..., repo=...)
  • subscriptions: Optional list of SubscriptionInput notification subscriptions.
  • delivery_policy: Delivery policy, either "multishot" or "singleshot".
Returns: The created integration data.

delete

delete(id: str) -> bool
Delete an integration.
parameters
  • id: Integration UUID.
Returns: True if the integration was deleted.

get

get(id: str) -> Optional[DescribeIntegrationIntegration]
Get a specific integration by ID.
parameters
  • id: Integration UUID.
Returns: The integration data, or None if not found.

get_provider

get_provider(name: str) -> Optional[DescribeProviderProvider]
Get a specific provider by name.
parameters
  • name: Provider name.
Returns: The provider data, or None if not found.

list

list(team: str) -> List[ListIntegrationsIntegrations]
List integrations for a team.
parameters
  • team: Team ID or key.
Returns: A list of integration objects.

list_providers

list_providers() -> List[ListProvidersProviders]
List available integration providers. Returns: A list of provider objects.

test_notification

test_notification(topic: str, payload: NotificationPayload, scope_user: Optional[List[str]] = None, scope_team: Optional[str] = None, scope_organization: bool = False, scope_admin: bool = False) -> TestNotificationTestNotification
Test notification delivery.
parameters
  • topic: Notification topic string.
  • payload: Notification payload, e.g. NotificationPayload(job_update=JobUpdatePayload(...)).
  • scope_user: List of user UUIDs to scope the notification to.
  • scope_team: Team ID or key to scope the notification to.
  • scope_organization: If True, scope the notification to the organization.
  • scope_admin: If True, scope the notification to admins.
Returns: The test notification result with eventId and message.

update

update(id: str, name: Optional[str] = None, enabled: Optional[bool] = None, connection: ConnectionConfigInputSlack | ConnectionConfigInputSmtp | ConnectionConfigInputWebhook | ConnectionConfigInputGitHub | None = None, subscriptions: Optional[List[SubscriptionInput]] = None, delivery_policy: Literal['multishot', 'singleshot'] | None = None) -> UpdateIntegrationUpdateIntegration
Update an existing integration.
parameters
  • id: Integration UUID.
  • name: New name for the integration.
  • enabled: Enable or disable the integration.
  • connection: Updated connection config. See create() for the available types.
  • subscriptions: Updated list of SubscriptionInput notification subscriptions.
  • delivery_policy: Updated delivery policy, either "multishot" or "singleshot".
Returns: The updated integration data.

Jobs

Resource to interact with jobs. Access via adaptive.jobs

cancel

cancel(job_id: str) -> JobDataPlus
Cancel a running job.
parameters
  • job_id: The ID of the job to cancel.
Returns: The updated job data after cancellation.

get

get(job_id: str) -> JobDataPlus | None
Get the details of a specific job.
parameters
  • job_id: The ID of the job to retrieve.
Returns: The job data if found, otherwise None.

list

list(first: int | None = 100, last: int | None = None, after: str | None = None, before: str | None = None, kind: list[Literal['TRAINING', 'EVALUATION', 'DATASET_GENERATION', 'MODEL_CONVERSION', 'CUSTOM']] | None = None, project: str | None = None) -> ListJobsJobs
List jobs with pagination and filtering options.
parameters
  • first: Number of jobs to return from the beginning.
  • last: Number of jobs to return from the end.
  • after: Cursor for forward pagination.
  • before: Cursor for backward pagination.
  • kind: Filter by job types.
  • project: Filter by project key.
Returns: A paginated list of jobs.

run

run(recipe_key: str, num_gpus: int, args: dict[str, Any] | None = None, name: str | None = None, project: str | None = None, compute_pool: str | None = None) -> JobDataPlus
Run a job using a specified recipe.
parameters
  • recipe_key: The key of the recipe to run.
  • num_gpus: Number of GPUs to allocate for the job.
  • args: Optional arguments to pass to the recipe; must match the recipe schema.
  • name: Optional human-readable name for the job.
  • project: Project key for the job.
  • compute_pool: Optional compute pool key to run the job on.
Returns: The created job data.

Feedback

Resource to interact with and log feedback. Access via adaptive.feedback

get_key

get_key(project: str, feedback_key: str) -> MetricData | None
Get the details of a feedback key.
parameters
  • project: Project key.
  • feedback_key: The feedback key.

list_keys

list_keys() -> Sequence[MetricDataAdmin]
List all feedback keys.

log_metric

log_metric(value: bool | float | int, completion_id: str | UUID, feedback_key: str, user: str | UUID | None = None, details: str | None = None) -> FeedbackOutput
Log metric feedback for a single completion. The value can be a float, int, or bool, depending on the kind of the feedback_key it is logged against.
parameters
  • value: The feedback value.
  • completion_id: The completion_id to attach the feedback to.
  • feedback_key: The feedback key to log against.
  • user: ID of user submitting feedback. If not None, will be logged as metadata for the request.
  • details: Textual details for the feedback. Can be used to provide further context on the feedback value.

log_preference

log_preference(feedback_key: str, preferred_completion: str | UUID | ComparisonCompletion, other_completion: str | UUID | ComparisonCompletion, user: str | UUID | None = None, messages: List[Dict[str, str]] | None = None, tied: Literal['good', 'bad'] | None = None, project: str | None = None) -> ComparisonOutput
Log preference feedback between 2 completions.
parameters
  • feedback_key: The feedback key to log against.
  • preferred_completion: Can be a completion_id or a dict with keys model and text, corresponding to a valid model key and its attributed completion.
  • other_completion: Can be a completion_id or a dict with keys model and text, corresponding to a valid model key and its attributed completion.
  • user: ID of user submitting feedback.
  • messages: Input chat messages, each dict with keys role and content. Ignored if preferred_ and other_completion are completion_ids.
  • tied: Indicator if both completions tied as equally bad or equally good.
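
The two accepted completion shapes can be sketched as follows (illustrative only; is_inline_completion is a hypothetical helper, not part of the SDK):

```python
from uuid import uuid4

# a completion is referenced either by id, or inline as {"model", "text"}
preferred = {"model": "model_a", "text": "Paris is the capital of France."}
other_completion_id = str(uuid4())

def is_inline_completion(completion):
    # mirrors the ComparisonCompletion shape: a dict with exactly model and text
    return isinstance(completion, dict) and completion.keys() == {"model", "text"}
```

Either form can be passed as preferred_completion or other_completion; messages are only needed when inline completions are used.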

register_key

register_key(project: str, key: str, kind: Literal['scalar', 'bool'], scoring_type: Literal['higher_is_better', 'lower_is_better'] = 'higher_is_better', name: str | None = None, description: str | None = None) -> MetricData
Register a new feedback key. Feedback can be logged against this key once it is created.
parameters
  • key: Feedback key.
  • kind: Feedback kind. If "bool", you can log values 0, 1, True or False only. If "scalar", you can log any integer or float value.
  • scoring_type: Indication of what good means for this feedback key: a higher numeric value (or True), or a lower numeric value (or False).
  • name: Human-readable feedback name that will render in the UI. If None, will be the same as key.
  • description: Description of intended purpose or nuances of feedback. Will render in the UI.
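
The kind rule above can be sketched as a small validator (an illustration of the documented constraint, not SDK code; valid_feedback_value is hypothetical):

```python
def valid_feedback_value(kind, value):
    # "bool" keys accept only 0, 1, True or False;
    # "scalar" keys accept any integer or float value
    if kind == "bool":
        return value in (0, 1, True, False)
    if kind == "scalar":
        return isinstance(value, (int, float))
    raise ValueError(f"unknown kind: {kind}")
```

For example, logging 0.5 against a "bool" key would be rejected, while any numeric value is fine for a "scalar" key.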

Interactions

Resource to interact with interactions. Access via adaptive.interactions

create

create(messages: List[ChatMessage], completion: str, model: str | None = None, feedbacks: List[InteractionFeedbackDict] | None = None, user: str | UUID | None = None, session_id: str | UUID | None = None, project: str | None = None, ab_campaign: str | None = None, labels: Dict[str, str] | None = None, created_at: str | None = None) -> AddInteractionsResponse
Create/log an interaction.
parameters
  • messages: Input chat messages, each dict should have keys role and content.
  • completion: Model completion.
  • model: Model key.
  • feedbacks: List of feedbacks, each dict with keys feedback_key, value, and optionally details.
  • user: ID of user making the request. If not None, will be logged as metadata for the interaction.
  • session_id: Session ID to group related interactions.
  • project: Project key. Falls back to client’s default if not provided.
  • ab_campaign: AB test key. If set, provided feedbacks will count towards AB test results.
  • labels: Key-value pairs of interaction labels.
  • created_at: Timestamp of interaction creation or ingestion.
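
The payload shapes for create can be sketched as follows (key names from the docs above; the feedback_key "accuracy" and model key "model_key" are placeholders):

```python
# input messages: each dict has role and content
messages = [{"role": "user", "content": "What is 2 + 2?"}]
# the model completion is passed separately as a string
completion = "4"
# each feedback dict has feedback_key, value, and optionally details
feedbacks = [{"feedback_key": "accuracy", "value": 1, "details": "correct answer"}]

# with a configured client, the interaction would be logged as:
# adaptive.interactions.create(messages=messages, completion=completion,
#                              feedbacks=feedbacks, model="model_key")
```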

get

get(completion_id: str, project: str | None = None) -> CompletionData | None
Get the details for one specific interaction.
parameters
  • completion_id: The ID of the completion.

list

list(order: List[Order] | None = None, filters: ListCompletionsFilterInput | None = None, page: CursorPageInput | None = None, group_by: Literal['model', 'prompt'] | None = None, project: str | None = None) -> ListInteractionsCompletions | ListGroupedInteractionsCompletionsGrouped
List interactions in client’s project.
parameters
  • order: Ordering of results.
  • filters: List filters.
  • page: Paging config.
  • group_by: Retrieve interactions grouped by selected dimension.

Models

Resource to interact with models. Access via adaptive.models

add_external

add_external(name: str, external_model_id: str, api_key: str, provider: Literal['open_ai', 'google', 'azure'], endpoint: str | None = None, extra_params: dict[str, Any] | None = None) -> ModelData
Add proprietary external model to Adaptive model registry.
parameters
  • name: Adaptive name for the new model.
  • external_model_id: Should match the model id publicly shared by the model provider.
  • api_key: API Key for authentication against external model provider.
  • provider: External proprietary model provider.
  • extra_params: Additional provider-specific parameters (supported for open_ai and azure).

add_hf_model

add_hf_model(hf_model_id: SupportedHFModels, output_model_name: str, output_model_key: str, hf_token: str, compute_pool: str | None = None) -> JobData
Add model from the HuggingFace Model hub to Adaptive model registry. It will take several minutes for the model to be downloaded and converted to Adaptive format.
parameters
  • hf_model_id: The ID of the selected model repo on HuggingFace Model Hub.
  • output_model_key: The key that will identify the new model in Adaptive.
  • hf_token: Your HuggingFace token, needed to validate access to gated/restricted models.

add_to_project

add_to_project(model: str, project: str | None = None) -> bool
Attach a model to the client’s project.
parameters
  • model: Model key.
  • project: Project key. Falls back to client’s default if not provided.
Returns: True if the model was successfully attached.

attach

attach(model: str, wait: bool = False, make_default: bool = False, project: str | None = None, placement: ModelPlacementInput | None = None, num_draft_steps: int | None = None) -> ModelServiceData
Attach a model to the client’s project.
parameters
  • model: Model key.
  • wait: If the model is not already deployed, attaching it to the project will automatically deploy it. If True, this call blocks until the model is online.
  • make_default: Make the model the project’s default on attachment.
  • num_draft_steps: Optional number of speculative decoding draft steps.

deploy

deploy(model: str, wait: bool = False, make_default: bool = False, project: str | None = None, placement: ModelPlacementInput | None = None, num_draft_steps: int | None = None) -> ModelServiceData
Deploy a model for inference in the specified project.
parameters
  • model: Model key.
  • wait: If True, block until the model is online.
  • make_default: Make the model the project’s default after deployment.
  • project: Project key.
  • placement: Optional placement configuration for the model.
  • num_draft_steps: Optional number of speculative decoding draft steps.
Returns: The model service data after deployment.

detach

detach(model: str, project: str) -> bool
Detach model from client’s project.
parameters
  • model: Model key.

get

get(model: str) -> ModelData | None
Get the details for a model.
parameters
  • model: Model key.

list

list(filter: ModelFilter | None = None) -> Sequence[ListModelsModels]
List all models in Adaptive model registry.

terminate

terminate(model: str, force: bool = False) -> str
Terminate model, removing it from memory and making it unavailable to all projects.
parameters
  • model: Model key.
  • force: If the model is attached to several projects, force must be True for the model to be terminated.

update

update(model: str, is_default: bool | None = None, desired_online: bool | None = None, project: str | None = None, placement: ModelPlacementInput | None = None, num_draft_steps: int | None = None) -> ModelServiceData
Update config of model attached to client’s project.
parameters
  • model: Model key.
  • is_default: Change the selection of the model as default for the project. True to promote to default, False to demote from default. If None, no changes are applied.
  • desired_online: Turn model inference on or off for the client project. This does not influence the global status of the model, it is project-bounded. If None, no changes are applied.
  • num_draft_steps: Optional number of speculative decoding draft steps.

update_compute_config

update_compute_config(model: str, compute_config: ModelComputeConfigInput) -> ModelData
Update compute config of model.

Permissions

Resource to list permissions. Access via adaptive.permissions

list

list() -> List[str]
List all available permissions in the system. Returns: A list of permission identifiers.

Roles

Resource to manage roles. Access via adaptive.roles

create

create(key: str, permissions: List[str], name: str | None = None) -> CreateRoleCreateRole
Create a new role.
parameters
  • key: Role key.
  • permissions: List of permission identifiers such as project:read. You can list all possible permissions with client.permissions.list().
  • name: Role name; if not provided, defaults to key.

list

list() -> List[ListRolesRoles]
List all roles. Returns: A list of role objects.

Teams

Resource to manage teams. Access via adaptive.teams

create

create(key: str, name: str | None = None) -> CreateTeamCreateTeam
Create a new team.
parameters
  • key: Unique key for the team.
  • name: Human-readable team name. If not provided, defaults to key.
Returns: The created team data.

list

list() -> List[ListTeamsTeams]
List all teams. Returns: A list of team objects.

Projects

Resource to interact with projects. Access via adaptive.projects

create

create(key: str, name: str | None = None, description: str | None = None, team: str | None = None) -> ProjectData
Create a new project.
parameters
  • key: Project key.
  • name: Human-readable project name which will be rendered in the UI. If not set, will be the same as key.
  • description: Description of the project which will be rendered in the UI.

get

get(project: str | None = None) -> ProjectData | None
Get details for the client’s project.

list

list() -> Sequence[ProjectData]
List all projects.

share

share(project: str, team: str, role: str, is_owner: bool = False) -> ProjectData | None
Share project with another team. Requires project:share permissions on the target project.
parameters
  • project: Project key.
  • team: Team key.
  • role: Role key.

unshare

unshare(project: str, team: str) -> ProjectData | None
Remove project access for a team. Requires project:share permissions on the target project.
parameters
  • project: Project key.
  • team: Team key.

Users

Resource to manage users and permissions. Access via adaptive.users

add_to_team

add_to_team(email: str, team: str, role: str) -> UpdateUserSetTeamMember
Update team and role for user.
parameters
  • email: User email.
  • team: Key of the team to which the user will be added.
  • role: Key of the role assigned to the user.

create

create(email: str, name: str, teams_with_role: Sequence[tuple[str, str]]) -> UserData
Create a user with preset teams and roles.
parameters
  • email: User’s email address.
  • name: User’s display name.
  • teams_with_role: Sequence of (team_key, role_key) tuples assigning the user to teams with specific roles.
Returns: The created user data.

create_service_account

create_service_account(name: str, teams_with_role: Sequence[tuple[str, str]]) -> ServiceAccountCreateResult
Create a service account (system user) with an API key. Service accounts authenticate via API keys only (no OIDC login). The API key is returned once and cannot be retrieved later.
parameters
  • name: Account name. Must contain only lowercase letters (a-z), numbers, hyphens, and underscores.
  • teams_with_role: Sequence of (team_key, role_key) tuples.
Returns: ServiceAccountCreateResult with user_id, email, and api_key.
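
The documented name constraint (lowercase letters, numbers, hyphens, and underscores only) can be checked client-side with a simple pattern (an illustrative pre-check, not the server's validation):

```python
import re

# lowercase letters (a-z), numbers, hyphens, and underscores only
SERVICE_ACCOUNT_NAME = re.compile(r"[a-z0-9_-]+")

def is_valid_service_account_name(name):
    return SERVICE_ACCOUNT_NAME.fullmatch(name) is not None
```

Validating before calling create_service_account avoids a round trip for an obviously malformed name.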

delete

delete(email: str)
Delete a user from the system.
parameters
  • email: The email address of the user to delete.

list

list() -> Sequence[UserData]
List all users registered to the Adaptive deployment.

me

me() -> UserData | None
Get details of current user.

remove_from_team

remove_from_team(email: str, team: str) -> UserData
Remove user from team.
parameters
  • email: User email.
  • team: Key of team to remove user from.

Types

ChatMessage

  • role: Required[Literal['system', 'user', 'assistant']]
  • content: Required[str]

ComparisonCompletion

  • text: Required[str]
  • model: Required[str]

CompletionComparisonFilterInput

  • metric: Required[str]

CompletionFeedbackFilterInput

  • metric: Required[str]
  • value: NotRequired[NumericCondition]
  • reasons: NotRequired[List[str]]
  • user: NotRequired[Any]

CompletionLabelFilter

  • key: Required[str]
  • value: NotRequired[List[str]]

CursorPageInput

  • first: NotRequired[int]
  • after: NotRequired[str]
  • before: NotRequired[str]
  • last: NotRequired[int]

InteractionFeedbackDict

  • feedback_key: Required[str]
  • value: Required[int | float | bool]
  • details: NotRequired[str]

JudgeExampleInput

  • input: Required[List[ChatMessage]]
  • output: Required[str]
  • passes: Required[bool]
  • reasoning: Required[str]

ListCompletionsFilterInput

  • models: NotRequired[List[str]]
  • timerange: NotRequired['TimeRange']
  • session_id: NotRequired[Any]
  • user_id: NotRequired[Any]
  • feedbacks: NotRequired[List['CompletionFeedbackFilterInput']]
  • comparisons: NotRequired[List['CompletionComparisonFilterInput']]
  • labels: NotRequired[List['CompletionLabelFilter']]
  • prompt_hash: NotRequired[str]
  • completion_id: NotRequired[Any]
  • source: NotRequired[List[CompletionSource]]

ModelComputeConfigInput

  • tp: NotRequired[int]
  • kv_cache_len: NotRequired[int]
  • max_seq_len: NotRequired[int]

ModelFilter

  • in_storage: NotRequired[bool]
  • available: NotRequired[bool]
  • trainable: NotRequired[bool]
  • kind: NotRequired[List[Literal['Embedding', 'Generation']]]
  • view_all: NotRequired[bool]
  • online: NotRequired[List[Literal['ONLINE', 'OFFLINE', 'PENDING', 'ERROR']]]

ModelPlacementInput

  • compute_pools: Required[List[str]]
  • max_ttft_ms: NotRequired[int]

NumericCondition

  • eq: NotRequired[float]
  • neq: NotRequired[float]
  • gt: NotRequired[float]
  • gte: NotRequired[float]
  • lt: NotRequired[float]
  • lte: NotRequired[float]

Order

  • field: Required[str]
  • order: Required[Literal['ASC', 'DESC']]

TimeRange

  • from_: Required[int | str]
  • to: Required[int | str]
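
The NumericCondition fields combine as conjoined comparison predicates. A minimal sketch of how such a condition might be evaluated (illustrative, not the server's filter implementation; matches is hypothetical):

```python
def matches(condition, value):
    # evaluate a NumericCondition-style dict against a numeric value;
    # all supplied operators must hold
    checks = {
        "eq":  lambda v, t: v == t,
        "neq": lambda v, t: v != t,
        "gt":  lambda v, t: v > t,
        "gte": lambda v, t: v >= t,
        "lt":  lambda v, t: v < t,
        "lte": lambda v, t: v <= t,
    }
    return all(checks[op](value, target) for op, target in condition.items())
```

For example, {"gte": 0.5, "lt": 1.0} selects feedback values in the half-open interval [0.5, 1.0).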