curl --request PATCH \
  --url https://{api_endpoint}/api/v1/tenants/{tenant_id} \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "title": "<string>",
  "description": "<string>",
  "tags": [
    "<string>"
  ],
  "settings": {
    "model_settings": {
      "base_llm": {
        "model_name": "<string>",
        "model_id": "<string>",
        "model_params": {},
        "guardrails": {
          "hap": {
            "input": {
              "enabled": false,
              "threshold": 0.5
            },
            "output": {
              "enabled": false,
              "threshold": 0.5
            },
            "mask": {
              "remove_entity_value": false
            }
          },
          "social_bias": {
            "input": {
              "enabled": false,
              "threshold": 0.5
            },
            "output": {
              "enabled": false,
              "threshold": 0.5
            },
            "mask": {
              "remove_entity_value": false
            }
          },
          "pii": {
            "input": {
              "enabled": false,
              "threshold": 0.5
            },
            "output": {
              "enabled": false,
              "threshold": 0.5
            },
            "mask": {
              "remove_entity_value": false
            }
          }
        },
        "system_prompt": "<string>",
        "prompt_templates": {
          "chat_template": "<string>",
          "chat_template_params": {}
        }
      },
      "embeddings": {
        "model_name": "<string>",
        "model_id": "<string>"
      },
      "is_base_llm_enabled": true
    },
    "router_settings": {
      "router_type": "unified",
      "model_name": "<string>",
      "model_id": "<string>",
      "model_params": {},
      "routing_prompt": "<string>",
      "router_config": {
        "continue_journey_on_none": true,
        "confidence_threshold": 123,
        "confidence_method": "perplexity"
      }
    },
    "user_settings": {
      "confirm_routing": true,
      "clear_watsonx_assistant_context": false,
      "clientside_shortlisting": false
    },
    "slot_filling_settings": {
      "slot_filler_type": "unified",
      "model_name": "<string>",
      "model_id": "<string>",
      "model_params": {}
    }
  }
}
'

Validation error response:

{
  "detail": [
    {
      "loc": [
        "<string>"
      ],
      "msg": "<string>",
      "type": "<string>"
    }
  ]
}

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Model settings used to define the base LLM and other base models used by Orchestrate.
The base LLM model used by Orchestrate. By default, Orchestrate will use the system-configured base LLM.
Name of the model to use as the base LLM. Must be a valid model name available on watsonx.ai.
The ID of a custom configured model to use as the base LLM model used by Orchestrate.
Default model params to use.
Default AI safety guardrails settings.
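For example, a partial update that turns on PII detection for both input and output, and masks detected entity values, might look like the following fragment (the threshold value here is illustrative):

```json
{
  "settings": {
    "model_settings": {
      "base_llm": {
        "guardrails": {
          "pii": {
            "input": { "enabled": true, "threshold": 0.5 },
            "output": { "enabled": true, "threshold": 0.5 },
            "mask": { "remove_entity_value": true }
          }
        }
      }
    }
  }
}
```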
Override the system prompt used by the Orchestrate base LLM.
LLM-specific prompt templates in Jinja2 format.
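A chat_template is a Jinja2 string rendered against the conversation. The fragment below is a hypothetical sketch; the variable names (system_prompt, messages) are illustrative and not confirmed by this reference:

```json
{
  "settings": {
    "model_settings": {
      "base_llm": {
        "prompt_templates": {
          "chat_template": "{{ system_prompt }}\n{% for m in messages %}{{ m.role }}: {{ m.content }}\n{% endfor %}",
          "chat_template_params": {}
        }
      }
    }
  }
}
```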
The base embeddings model used by Orchestrate. By default, Orchestrate will use the system-configured base embeddings model.
The is_base_llm_enabled flag is used to switch routing between LLMs. By default, the value of the flag is False.
Configuration of the Router used by Orchestrate.
Name of the model to use for routing. Must be a valid model name available on watsonx.ai.
The ID of a custom configured model to use for routing.
Default model params to use.
Override the routing prompt used by the Orchestrate routing LLM.
Additional router-specific configuration properties.
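A partial update to router_settings alone might look like the fragment below, using the router_config fields shown in the request schema (the threshold value is illustrative):

```json
{
  "settings": {
    "router_settings": {
      "router_type": "unified",
      "router_config": {
        "continue_journey_on_none": true,
        "confidence_threshold": 0.7,
        "confidence_method": "perplexity"
      }
    }
  }
}
```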
Default user settings.
Whether Orchestrate should explicitly confirm routing decisions with the user. Enabled by default.
If set to True, and the last turn event was from a Dialog Assistant where branch_exited was True and branch_completion_reason is completed, then the prior context is cleared.
If client-side shortlisting is enabled, a custom_routing_table is expected in the message. Defaults to False.
Configuration of the Slot filler used by Orchestrate.
Name of the model to use for slot filling. Must be a valid model name available on watsonx.ai.
The ID of a custom configured model to use for slot filling.
Default model params to use.
Successful Response
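The same request can be issued from Python. The sketch below only builds the URL, headers, and JSON body for a partial update; the helper name and example values are hypothetical, and the actual HTTP call is left as a comment:

```python
import json


def build_tenant_patch(api_endpoint, tenant_id, token, **fields):
    """Build the URL, headers, and body for PATCH /api/v1/tenants/{tenant_id}.

    Since PATCH is a partial update, only the fields you pass are included.
    (Helper name and structure are illustrative, not part of the API.)
    """
    url = f"https://{api_endpoint}/api/v1/tenants/{tenant_id}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps(fields)
    return url, headers, body


# Example: update only the title and disable routing confirmation.
url, headers, body = build_tenant_patch(
    "example.com", "tenant-123", "my-token",
    title="Support tenant",
    settings={"user_settings": {"confirm_routing": False}},
)
print(url)
print(body)
# To send the request, e.g. with the requests library:
# requests.patch(url, headers=headers, data=body)
```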