Why descriptions and instructions matter
Descriptions and instructions are not just metadata; they directly influence orchestration:
- Supervisor agents rely on descriptions to route tasks to the right collaborator.
- Instructions guide the LLM’s reasoning, ensuring the agent uses its tools and collaborators effectively.
Naming agents and tools
The name uniquely identifies the agent in the system and UI. Assign clear and unique names to each agent and tool you create. These names should reflect their capabilities and functions, making it easier for users to identify them within watsonx Orchestrate. To write effective names:
- Use snake_case instead of camelCase; snake_case names usually work better with routing agents. Agent names must not include spaces or special characters.
- Keep names short and descriptive.
- Avoid generic terms like “helper” or “assistant”.
- Use domain-specific language (for example, sales_outreach_agent).
- Good: ibm_historical_knowledge_agent
- Bad: myAgent1
Writing instructions for agents
Instructions define how an agent behaves, including its tone, reasoning style, and decision-making process. They guide the underlying language model to produce consistent, predictable outputs and determine how the agent uses tools and collaborators.
- They act as the agent’s persona and operating manual.
- They ensure the agent responds in a way that aligns with user expectations and organizational standards.
- They influence orchestration by determining when and how the agent invokes tools or other agents.
Best practices for writing instructions
- Use natural language: Write instructions as clear, conversational directives. Avoid overly technical or ambiguous phrasing.
- Define tone and style: Specify whether responses should be professional, friendly, concise, and so on.
- Include tool usage rules: Tell the agent when and how to use specific tools.
- Set error-handling guidance: Define what to do if required information is unavailable.
- Avoid overloading instructions: Keep them focused on behavior and decision-making, not on task-specific details.
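An illustrative instructions block that applies these practices together; the agent persona and the lookup_order tool are hypothetical:

```text
You are a friendly, professional customer support agent.
Tone: conversational but formal; keep answers under 150 words.
Tool usage: call the lookup_order tool whenever the user provides an
order number; never guess an order status.
Error handling: if a required detail (such as the order number) is
missing, ask one clarifying question instead of answering.
Stay focused on support topics; do not give legal or billing advice.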
The GPT-OSS model (groq/openai/gpt-oss-120b)
When building an agent with the ADK, the agent model can influence how the agent behaves and interprets instructions.
groq/openai/gpt-oss-120b is highly capable but also sensitive to vague or conflicting instructions. To ensure reliable behavior:
- Be explicit: Clearly state priorities and constraints.
- Avoid ambiguity: Conflicting directives can lead to unpredictable outputs.
- Test iteratively: Validate instructions with sample tasks before deploying.
- Limit complexity: Break down multi-step reasoning into clear, sequential guidance.
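For context, the model is selected with a single field in the agent definition. The sketch below assumes a typical native-agent YAML layout; verify the exact field names (such as `spec_version` and `llm`) against your ADK version:

```yaml
spec_version: v1
kind: native
name: supplier_risk_agent
llm: groq/openai/gpt-oss-120b
description: >
  Evaluates supplier risk. Use when the user requests supplier risk
  evaluation or mentions creditworthiness.
instructions: >
  Be explicit and unambiguous. Prioritize connected knowledge bases
  over internal knowledge. Use at most two tool calls per request.
```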
Note:
This model is a non-IBM product governed by a third-party license that may impose use restrictions and other obligations. By using this model you agree to the terms. Read the terms
Special considerations
The default groq/openai/gpt-oss-120b model does not use the standard system prompt from watsonx Orchestrate. As a result, it can behave differently from other models, including:
- Not explicitly identifying itself as part of the watsonx Orchestrate ecosystem.
- Preferring its internal knowledge over your connected knowledge bases, unless you instruct otherwise.
- Formatting hyperlinks improperly, unless you instruct otherwise.
- Not following the supported agent styles.
To mitigate these behaviors, write instructions that:
- Prioritize knowledge bases over internal knowledge.
- Cap reasoning depth and iteration count.
- Specify output formatting.
- Keep chit-chat short.
- Enforce a strict output budget.
- Fail fast when data is missing.
Combined instruction example (recommended template)
Use this as a compact block tailored for groq/openai/gpt-oss-120b:
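A sketch of such a template, covering the considerations above; adapt the numbers and wording to your agent:

```text
You are part of IBM watsonx Orchestrate.
Prefer the connected knowledge bases over your internal knowledge;
say which knowledge base you used.
Reason in at most 3 steps and make at most 2 tool calls per request.
Format links as Markdown: [title](url).
Keep small talk to one sentence.
Keep answers under 200 words unless the user asks for more detail.
If required data is missing or a tool fails, say so and ask for the
missing input; do not invent values.
```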
Good vs. bad instruction snippets for this topic
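Illustrative good and bad snippets; the wording is hypothetical:

```text
Good (concise, enforces constraints):
"Answer only from the HR knowledge base. If the answer is not there,
say so. Keep responses under 150 words and format links as Markdown."

Bad (vague, conflicting):
"Be as helpful and thorough as possible. Use your own knowledge
freely, but also always check the knowledge base. Answer quickly but
in full detail."
```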
Writing descriptions for agents
The agent description is used by an agent to determine when and how to delegate a task to a collaborator agent, tool, or knowledge base, ensuring the right request is sent to the right capability. When adding any artifact (a collaborator agent, tool, or knowledge base) to an agent, the artifact’s description is critical to the agent’s success. Agent descriptions complement agent names by providing detailed information about their purpose, capabilities, and usage. A well-written description helps users understand the agent’s role and potential applications. Descriptions should not be written in isolation: they must reflect the agent’s scope, use cases, and interaction with collaborators, tools, and knowledge bases. This ensures reliable routing and prevents ambiguity.
Key principles
- Descriptions as instructions: Treat the agent description like instructions to the agent on how it should use the artifact (collaborator agent, tool, or knowledge base) in question.
- Hierarchy matters:
- Agent descriptions are broader and set the overall purpose.
- Collaborator descriptions narrow the scope to specific tasks.
- Tool descriptions are the most specific, clarifying exact functionality.
- Avoid overlap: Each agent should have a distinct scope. If scopes are similar, clearly differentiate them.
- Include context markers:
- Geographic scope (for example, “US”) helps route queries correctly.
- Domain-specific language (for example, “pet-parents and their fur-babies”) sets tone and aligns with instructions.
- Don’t overload descriptions: Mention core capabilities, not every tool. The agent should infer tool usage from context.
- Define trigger conditions: Clarify when the agent should use the artifact, such as “Use this when the user requests supplier risk evaluation or mentions creditworthiness.”
- Specify actions: Be specific about the actions the artifact should perform. When possible, include how the artifact should perform those actions, such as “Access loan data from loan APIs, calculate affordability, and summarize affordability differences.”
- Define restrictions: Note desired restrictions and limitations. Most artifacts have a boundary they should stop at, such as “Do not give legal or tax advice; only provide estimates and data-driven comparisons.”
- Use appropriate jargon: Define any industry-specific terms or acronyms so they won’t be misunderstood.
Description examples
HR agent description
Supplier Analysis tool routing description
Finance agent routing description
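Illustrative examples of such descriptions; the names and scopes are hypothetical:

```text
HR agent description:
"Handles US employee HR questions: vacation balances, benefits
enrollment, and payroll dates. Use this agent when the user asks
about their own employment records. Does not give legal advice."

Supplier Analysis tool routing description:
"Use this tool when the user requests supplier risk evaluation or
mentions creditworthiness. Input: a supplier ID. Output: a risk
score and a short summary."

Finance agent routing description:
"Routes requests about budgets, forecasts, and spend reports. Use
for aggregate financial data, not individual payroll questions."
```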
Example: agent and collaborator description for a pet store
The following example demonstrates how to write clear, context-rich descriptions for an agent and its collaborators, tools, and knowledge bases. This example also shows how tone and language can align with the agent’s persona, which should be reinforced in the instructions for a consistent user experience.
Agent description: USPetTreeAgent
Collaborator description: MatchMakerAgent
Tool descriptions:
- SearchPetTreeCats: Search the PetTree registry for cats available for adoption.
- SearchPetTreeDogs: Search the PetTree registry for dogs available for adoption.
- GetPetTreeAnimalProfile: Get the full profile of an individual cat or dog.
- ViewAnimalPhotos: View pictures of cats and dogs available for adoption.
- FileAdoptionApplication: File a pet adoption application for a specific cat or dog.
Knowledge base descriptions:
- Cat Breeds Care: Facts, personalities, and care guides for different cat breeds.
- Dog Breeds Care: Facts, personalities, and care guides for different dog breeds.
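For instance, the agent and collaborator descriptions might read as follows; the wording is illustrative:

```text
USPetTreeAgent:
"Helps US pet-parents find and adopt their future fur-babies.
Searches the PetTree registry, shows animal profiles and photos,
and files adoption applications. Use for US adoptions only."

MatchMakerAgent:
"Use this collaborator when a pet-parent is unsure which breed
suits them. It matches lifestyle, home size, and experience level
to suitable cat and dog breeds."
```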
Writing descriptions for tools
A good tool description helps agents identify and use the tool effectively. It should include a general overview of the tool’s purpose, as well as details about its inputs and outputs. In Python tools, descriptions are defined in docstrings, following the Google docstring style.
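A sketch of a Python tool with a Google-style docstring; the function name, parameters, and stub data are hypothetical, and a real ADK tool would also carry the ADK's tool decorator, omitted here so the example stays self-contained:

```python
def get_vacation_balance(employee_id: str) -> dict:
    """Retrieve the remaining vacation balance for an employee.

    Use this tool when the user asks how many vacation days they
    have left. Do not use it for sick-leave questions.

    Args:
        employee_id: The unique employee identifier, for example "E1234".

    Returns:
        A dictionary with the keys "employee_id" and "days_remaining".
    """
    # Illustrative stub: a real implementation would call an HR system API.
    sample_balances = {"E1234": 12, "E5678": 7}
    return {
        "employee_id": employee_id,
        "days_remaining": sample_balances.get(employee_id, 0),
    }
```

The overview sentence tells the routing model *when* to call the tool, while the Args and Returns sections tell it *how* to fill the inputs and interpret the output.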
Designing agents and tools for best performance
When designing agents and tools, aim for balanced complexity. Components that are too simple may lack utility, while overly complex designs can reduce the model’s ability to reason effectively.
Guidelines for Agents
- Agents using LLaMA-based LLMs perform best with 10 or fewer tools or collaborators.
- For complex use cases requiring many tools, break the problem into smaller subproblems and assign them to collaborator agents.
- This limit may vary for more powerful models.
Guidelines for Tools
- Keep input and output schemas as simple as possible.
- Avoid tools with:
  - A large number of input parameters.
  - Parameters with deeply nested or complex data types.
- Remember that complex schemas make it harder for the LLM to use the tool effectively.
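To illustrate, compare a hypothetical over-complex tool signature with a simplified one; both functions and their catalog data are made up for this sketch:

```python
from typing import Any, Dict, List


# Harder for the LLM: five parameters, several of them deeply nested.
def search_products_complex(filters: Dict[str, Any],
                            sort: List[Dict[str, str]],
                            pagination: Dict[str, int],
                            include_facets: bool,
                            locale: str) -> Dict[str, Any]:
    """Search products using nested filter, sort, and pagination payloads."""
    ...


# Easier for the LLM: two flat, primitive parameters.
def search_products(query: str, max_results: int = 10) -> list:
    """Search products by a free-text query.

    Args:
        query: Free-text search terms.
        max_results: Maximum number of products to return.

    Returns:
        A list of matching product names (illustrative stub data).
    """
    catalog = ["blue leash", "cat tree", "dog bed", "blue collar"]
    return [p for p in catalog if query.lower() in p][:max_results]
```

The simplified signature gives the model fewer degrees of freedom to get wrong: it only has to produce a string and, optionally, an integer.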

