You create and import native agents into watsonx Orchestrate, building them from YAML, JSON, or Python files.

Configuring native agents

Each native agent includes these components:
llm
string
Specifies the large language model (LLM) that powers the agent’s ability to understand and respond to queries. For more information, see Managing custom LLMs with the AI gateway.
style
string
Defines the prompting structure the agent uses. This structure determines how the LLM interprets and responds to instructions. For more information, see Choosing the agent style.
hide_reasoning
boolean
Defines whether the agent’s reasoning appears to the end user. When set to True, the agent hides its reasoning from the user. When set to False, the agent displays its reasoning. The default value is False.
instructions
string
Provides natural language guidance to the LLM. These instructions shape the agent’s behavior, such as adopting a specific persona like a customer service representative and explaining how to use tools and collaborators. For more information, see Writing instructions for agents.
tools
list<string>
Extends the LLM’s capabilities by enabling access to external functions and services. Examples include:
  • OpenAPI definitions for external APIs
  • Python functions for scripting more complex interactions with external systems
  • Agentic workflows for orchestrating operations across tools and agents
  • Toolkit-exposed tools such as an MCP server
collaborators
list<string>
Lists other agents this agent interacts with to solve complex problems. Collaborators can include native watsonx Orchestrate agents, external agents, or watsonx Assistants. For more information, see Connect to external agents.
description
string
Provides a human-readable summary of the agent’s purpose, visible in the Manage Agents UI. It also helps other agents understand its role when used as a collaborator. The description does not affect responses unless invoked as a collaborator. For more information, see Writing descriptions for agents.
knowledge_base
list<string>
Represents domain-specific knowledge that the LLM acquires from uploaded files or connected vector data stores. Agent knowledge allows you to provide information that the agent should inherently know and use to answer questions. This knowledge can come from documents you upload or from various data sources integrated with watsonx Orchestrate, such as Milvus, Elasticsearch, AstraDB, and others. To learn more about setting up a knowledge base, see the section on Knowledge bases.
restrictions
string
Specifies whether the agent remains editable after import. This field accepts one of the following options:
  • editable Sets the agent as editable. This is the default value.
  • non_editable Sets the agent as non-editable and prevents it from being exported.
icon
string
An SVG-format string of an icon for the agent. The icon is used in the UI and in channels that the agent is connected to. It must follow these restrictions:
  • SVG format
  • Square shape
  • Width and height between 64 and 100 pixels
  • Maximum file size: 200 KB
Example:
spec_version: v1
kind: native
name: agent_name
llm: watsonx/ibm/granite-3-8b-instruct  # watsonx Orchestrate (watsonx) model provider followed by the model id: ibm/granite-3-8b-instruct
style: default
hide_reasoning: False
description: |
    A description of what the agent should be used for when used as a collaborator.
instructions: |
    These instructions control the behavior of the agent and provide 
    context for how to use its tools and agents.
collaborators:
  - name_of_collaborator_agent_1
  - name_of_collaborator_agent_2
tools:
  - name_of_tool_1
  - name_of_tool_2
knowledge_base:
  - name_of_knowledge_base
restrictions: editable
icon: "<svg></svg>" # SVG icon string for the agent

Additional features of native agents

Customize your agent with extra features to match your needs.

Guidelines

Use guidelines to control agent behavior. Guidelines create predictable, rule-based responses. Apply them when you need consistent actions.
  • Define guidelines using the following format: When condition then perform an action and/or invoke a tool.
  • Include only guidelines relevant to the current user request in the agent prompt. This reduces complexity for the LLM.
  • Configure the guidelines in priority order. Guidelines execute based on their position in the list.
To configure guidelines, add a guidelines section and define each guideline with these fields:
guidelines
list<object>
A list of guidelines the agent should follow. Each guideline uses the format:
  • When condition then perform an action and/or invoke a tool. Provide at least one of action or tool.
Example:
spec_version: v1
kind: native # Optional, Default=native, Valid options ['native', 'external', 'assistant']
name: finance_agent
style: default # Optional, Valid options ['default', 'react', 'planner']
llm: watsonx/meta-llama/llama-3-2-90b-vision-instruct
description: |
  You are a helpful calculation agent that assists the user in performing math.
  This includes performing mathematical operations and providing practical use cases for math in everyday life.
instructions: |
  Always solve the mathematical equations using the correct order of operations (PEMDAS):
    Parentheses
    Exponents (including roots, powers, and so on)
    Multiplication and Division (from left to right)
    Addition and Subtraction (from left to right)

  Make sure to include decimal points when the user's input includes a float.
guidelines:
  - display_name: "User Dissatisfaction"
    condition: "The customer expresses dissatisfaction with the agent's response."
    action: "Acknowledge their frustration and ask for details about their experience so it can be addressed properly."
  - display_name: "Joy check"
    condition: "If the customer expresses joy or happiness about the response"
    action: "Respond by making chicken noises like 'bock bock' and then take no further action"
  - display_name: "Check user"
    condition: "If the customer expresses the need to check a user in the system."
    action: "Use the 'get_user' tool to check the user in the system"
    tool: "get_user"

tools:
  - get_user
collaborators: []

Web chat configuration

Adjust web chat settings to control how your agent behaves in the web chat UI. Set up a welcome message and starter prompts to guide users from the start.

Welcome message

Welcome message example
The welcome message is the first thing users see when they start interacting with your agent. Personalize this message to reflect your agent’s purpose. To configure it, define the welcome_content schema in your agent file:
welcome_content
object
Configures the welcome content.
Example:
spec_version: v1
style: react
name: service_now_agent
llm: watsonx/meta-llama/llama-3-2-90b-vision-instruct
description:  'Agent description'
instructions: ''
collaborators: []
tools: []
hidden: false
welcome_content:
  welcome_message: "Hello, I'm Agent. Welcome to watsonx Orchestrate!"
  description: "How can I help you today?"

Starter prompts

Starter prompts example
Starter prompts are predefined messages that help users start a conversation with your agent. Configure these prompts in the starter_prompts section of your agent file. First, define whether these prompts are the default set. Then, for each prompt, specify these details:
starter_prompts
object
Configures starter prompts shown to the user.
Example:
spec_version: v1
style: react
name: service_now_agent
llm: watsonx/meta-llama/llama-3-2-90b-vision-instruct
description:  'Agent description'
instructions: ''
collaborators: []
tools: []
hidden: false
starter_prompts:
    prompts:
        - id: "hello1"
          title: "Hello1"
          subtitle: ""
          prompt: "This is the message for the Hello prompt"

Chat with documents

Enable Chat with Documents to let users upload a document during a conversation and ask questions about its content. The document is available only for the session and is not stored in watsonx Orchestrate.
Important: When you enable this feature, users can upload documents during chat interactions. The agent uses the file name to route prompts to the correct document. Make sure file names are unique and meaningful to avoid confusion.
chat_with_docs
object
Configures the Chat with Documents feature.
Example:
spec_version: v1
style: react
name: service_now_agent
llm: watsonx/meta-llama/llama-3-2-90b-vision-instruct
description:  'Agent description'
instructions: ''
collaborators: []
tools: []
chat_with_docs:
  enabled: true
  citations:
    citations_shown: -1
  generation:
    idk_message: 'Your I don’t know message'
  supports_full_document: true 

Providing access to context variables

Use context variables to inject user-specific identifiers—such as username, user ID, or tenant ID—from upstream systems into your agent. This creates personalized, context-aware interactions during execution. Your agent includes these variables by default:
  • wxo_email_id - The email address of the user who invoked the agent or tool
  • wxo_user_name - The username of the user who invoked the agent or tool
  • wxo_tenant_id - The unique tenant ID for the request
Important: Do not use hyphens (-) when adding variables. Hyphens cause errors. Use underscores (_) instead.
context_access_enabled
boolean
default: false
Specifies whether the agent can access context variables set by the Runs API.
context_variables
list<string>
default: []
Lists the context variables that the agent can access.
Example:
kind: native
name: context_variables_sample
display_name: Context Variables Agent
description: I'm able to deal with context variables
llm: watsonx/meta-llama/llama-3-2-90b-vision-instruct
style: react
instructions: |
  You have access to additional information:
    - wxo_email_id - {wxo_email_id}
    - wxo_user_name - {wxo_user_name}
    - wxo_tenant_id - {wxo_tenant_id}
  Help users by responding with these values when they ask for them.
guidelines: []
collaborators: []
tools: []
knowledge_base: []
spec_version: v1
context_access_enabled: true
context_variables:
  - wxo_email_id
  - wxo_tenant_id
  - wxo_user_name
Note: For scheduled workflows, the email address and username belong to the user who scheduled the workflow.
By default, agents invoked during a run cannot access context variables. To enable access, set context_access_enabled to true. After enabling access, specify the context_variables that your agent uses. These variables define which contextual details, such as user identifiers or session data, are available during execution. You can reference these variables in descriptions, guidelines, or instructions by using curly braces, like {my_context_variable}. You can also pass them as arguments to a Python function; the agent runtime then fills the values automatically without asking the user. To set context variables for a run, pass a dictionary as context in the request body instead of context_variables when you use the Runs API.
spec_version: v1
style: react
name: service_now_agent
llm: watsonx/meta-llama/llama-3-1-70b-instruct
description:  'Agent description'
instructions: |
  You have access to clientID: {clientID}
collaborators: []
tools: []
context_access_enabled: true
context_variables:
  - clientID
  - channel     # Use it to get access to channel integration (embedded chat)
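The flow described above, where the runtime fills context-variable arguments for a Python tool and the caller supplies the values under context in the run request body, can be sketched as follows. This is a minimal illustration, not the ADK's real API: the @tool decorator below is a no-op stand-in for actual tool registration, and the request-body field names other than context are illustrative assumptions.

```python
import json

# No-op stand-in for the ADK's tool registration decorator (assumption).
def tool(fn):
    return fn

# The parameter name matches a declared context variable, so the agent
# runtime fills it automatically instead of asking the user for it.
@tool
def greet_user(wxo_email_id: str) -> str:
    """Return a greeting that uses the caller's email address."""
    return f"Hello, {wxo_email_id}!"

# To set the variables for a run, pass them as a dictionary under
# "context" in the request body (other fields here are illustrative).
run_request = {
    "message": {"role": "user", "content": "Greet me"},
    "context": {
        "wxo_email_id": "user@example.com",
        "clientID": "acme-42",
    },
}
body = json.dumps(run_request)
```

Because the runtime resolves the parameter by name, the tool itself stays an ordinary Python function and can be unit-tested by passing the value directly.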