Use a generative prompt node to generate AI-driven content with large language models (LLMs). To configure a generative prompt node in your agentic workflow, call the prompt() method and define the following parameters:
name (string; required): Unique identifier for the node.
display_name (string; optional): Display name for the node.
system_prompt (string or list[string]; optional): Initial instructions that guide the LLM's behavior. Also supports expressions. For example: flow.input.variable_1.
user_prompt (string or list[string]; optional): The specific request or task that you want the LLM to perform. Also supports expressions. For example: flow.input.variable_1.
prompt_examples (list[PromptExample]; optional): Examples of user prompts (see the sketch after this list). You can configure:
  • input: The example input prompt.
  • expected_output: The expected output for the given input.
  • enabled: A boolean value to enable or disable the example.
llm (string; optional): LLM used for content generation.
llm_parameters (PromptLLMParameters; optional): Parameters for the LLM. You can configure:
  • temperature: Controls randomness. Higher values produce more diverse outputs.
  • min_new_tokens: Sets the minimum number of tokens to generate.
  • max_new_tokens: Sets the maximum number of tokens to generate.
  • top_k: Limits token selection to the top k most likely options.
  • top_p: Uses nucleus sampling to select from the top p cumulative probability.
  • stop_sequences: Defines sequences that stop generation when encountered.
description (string; optional): Description of the node.
input_schema (type[BaseModel]; optional): Input schema for the LLM.
output_schema (type[BaseModel]; optional): Output schema for the LLM.
input_map (DataMap; optional): Defines input mappings using a structured collection of Assignment objects.
error_handler_config (NodeErrorHandlerConfig; optional): Defines the configuration for the retry option using a JSON structure. In this JSON, set the following fields:
  • error_message: An optional string that describes the retry error.
  • max_retries: An optional integer that limits how many times the node retries.
  • retry_interval: An optional integer that sets the interval between retries in milliseconds.
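The full example later in this section does not use prompt_examples. The following minimal sketch shows how few-shot examples and LLM parameters fit together in a prompt() call. The node name, prompt text, and example values are hypothetical, the input and output schemas are omitted for brevity, and the sketch assumes that plain dictionaries are accepted for PromptExample entries, just as they are for llm_parameters in the example below.
Python
from ibm_watsonx_orchestrate.flow_builder.flows import Flow, PromptNode

def build_sentiment_node(aflow: Flow) -> PromptNode:
    # Hypothetical node that classifies the sentiment of a short review.
    # {review} assumes an input schema with a field named "review" (not shown).
    return aflow.prompt(
        name="classify_sentiment",
        display_name="Classify review sentiment.",
        system_prompt="Classify the supplied review as positive, negative, or neutral.",
        user_prompt=["Here is the {review}"],
        prompt_examples=[
            {
                "input": "The product arrived on time and works great.",
                "expected_output": "positive",
                "enabled": True
            },
            {
                "input": "The package was damaged and support never replied.",
                "expected_output": "negative",
                "enabled": True
            }
        ],
        llm="meta-llama/llama-3-3-70b-instruct",
        llm_parameters={
            "temperature": 0,
            "max_new_tokens": 10
        }
    )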
The following example shows how to configure a generative prompt node in a Python function and instantiate it in an agentic workflow:
Python
"""
Build a simple agentic workflow that extracts information from a support
request message and forwards a summary to the helpdesk.
"""

from datetime import datetime
from typing import Optional
from pydantic import BaseModel, Field
from ibm_watsonx_orchestrate.flow_builder.flows import END, Flow, flow, START, PromptNode

from .email_helpdesk import email_helpdesk

class Message(BaseModel):
    """
    This class represents the content of a support request message.

    Attributes:
        message (str): support request message
    """
    message: str
    requester_name: Optional[str] = Field(default=None, description="Name of the support requestor.")
    requester_email: Optional[str] = Field(default=None, description="Email address of the support requestor.")
    received_on: Optional[str|datetime] = Field(default=None, description="The date when the support message was received.")

class SupportInformation(BaseModel):
    requester_name: str | None = Field(description="Name of the support requestor.")
    requester_email: str | None = Field(description="Email address of the support requestor.")
    summary: str = Field(description="A high level summary of the support issue.")
    details: str = Field(description="Original text of the support request.")
    order_number: str | None = Field(description="The order number.")
    received_on: datetime | None = Field(description="The date when the support message was received.")

def build_prompt_node(aflow: Flow) -> PromptNode:
    prompt_node = aflow.prompt(
        name="extract_support_info",
        display_name="Extract information from a support request message.",
        description="Extract information from a support request message.",
        system_prompt=[
            "You are a customer support processing assistant, your job take the supplied support request received from email,",
            "and extract the information in the output as specified in the schema."
        ],
        user_prompt=[
            "Here is the {message}"
        ],
        llm="meta-llama/llama-3-3-70b-instruct",
        llm_parameters={
            "temperature": 0,
            "min_new_tokens": 5,
            "max_new_tokens": 400,
            "top_k": 1,
            "stop_sequences": ["Human:", "AI:"]
        },
        error_handler_config={
            "error_message": "An error has occured while invoking the LLM",
            "max_retries": 1,
            "retry_interval": 1000
        },
        input_schema=Message,
        output_schema=SupportInformation
    )
    return prompt_node

@flow(
    name="extract_support_info",
    input_schema=Message,
    output_schema=SupportInformation
)
def build_extract_support_info(aflow: Optional[Flow] = None) -> Flow:
    """
    Creates an agentic workflow that uses the prompt node to extract information from a support
    message and forward the summary to the helpdesk.
    This agentic workflow relies on the agentic workflow engine to perform automatic data mapping at runtime.

    Args:
        aflow (Flow, optional): During deployment of the agentic workflow model, an agentic workflow instance is passed in.

    Returns:
        Flow: The created agentic workflow.
    """
    email_helpdesk_node = aflow.tool(email_helpdesk)
    prompt_node = build_prompt_node(aflow)

    aflow.sequence(START, prompt_node, email_helpdesk_node, END)

    return aflow
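The example above imports email_helpdesk from a sibling module that is not shown. The following is a minimal sketch of what that tool might look like, assuming the ADK's tool decorator from ibm_watsonx_orchestrate.agent_builder.tools. The parameter names are assumptions chosen to line up with the SupportInformation fields so that automatic data mapping can connect the two nodes, and the body is a placeholder rather than a real helpdesk integration.
Python
from typing import Optional
from ibm_watsonx_orchestrate.agent_builder.tools import tool

@tool
def email_helpdesk(summary: str, requester_email: Optional[str] = None) -> str:
    """
    Forward a support summary to the helpdesk mailbox.

    Args:
        summary: A high level summary of the support issue.
        requester_email: Email address of the support requestor, if known.

    Returns:
        A confirmation message.
    """
    # Placeholder: a real implementation would call an email service here.
    return f"Forwarded to helpdesk: {summary}"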