Performance Testing Approach

This guide focuses on how to measure and optimize tool performance. Performance varies significantly based on workload, configuration, system load, and network conditions. Always measure in your own environment.
Note: “Flow” in this document refers to wxO Agentic Workflow
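
Since results depend heavily on the environment, a small timing harness is useful for comparing tool latencies. A minimal sketch using Python's `time.perf_counter`; the `fake_tool` stand-in and run count are illustrative, not part of the wxO API:

```python
import time

def measure(fn, *args, runs=5):
    """Time a callable over several runs.

    Returns (cold, min, mean, max) wall-clock seconds. The first run is
    reported separately because it often includes one-time setup cost.
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    cold, warm = timings[0], timings[1:]
    return cold, min(warm), sum(warm) / len(warm), max(warm)

# Example: timing a trivial stand-in for a tool's internal work.
def fake_tool(n):
    return sum(i * i for i in range(n))

cold, fastest, mean, slowest = measure(fake_tool, 10_000)
```

Separating the first run matters here because tool pods incur cold start latency on their first invocation, which would otherwise skew averages.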

Overview

Tools are executable units in watsonx Orchestrate that perform specific tasks (API calls, data processing, calculations) within isolated, secure environments. They are invoked by Agents or orchestrated by Flows.

Tool Types in wxO

Choosing the Right Tool Type

  • wxO Flow (first choice): Native support for connections/security, Python code snippets, orchestration of wxO agents, tools, and people (e.g. confirmation before a transaction, custom forms), built-in document extraction and processing, and LLM support.
  • Python Tools: For custom libraries not available in wxO Flow code blocks.
  • Langflow Tools: For Langflow-specific AI components or existing Langflow flows.

Tool Type Comparison

| Tool Type | Runtime/Base | Timeout | Key Features | Best For |
| --- | --- | --- | --- | --- |
| wxO Flow | Visual workflow | None | Stateful, resumable, orchestrates Agents/Tools/People, LLM support | Multi-step workflows, API integrations, human-in-loop, simple document processing |
| Python | Python 3.12 | 2 min | Stateless, read-only FS, outbound network only | Data processing, business logic, calculations |
| Langflow | Langflow 1.7.1 | 2 min | Stateless, read-only FS, outbound network only, 2+ sec initialization | LLM processing, RAG, complex document analysis |
| API | OpenAPI spec | 2 min (sync) / None (async) | External REST APIs, network-dependent | Third-party integrations, external services |
| MCP | MCP protocol | 2 min | Extended capabilities, varies by implementation | Custom integrations, protocol-based interactions |

Understanding Tool Performance

The Granularity Principle

Key Insight: Tool performance has two distinct components:
  1. Tool Call Overhead: The cost of invoking a tool
    • Includes initialization, context setup, and result handling
    • Exists for every tool invocation
    • Relatively consistent per tool type
  2. Execution Inside the Tool: The actual work being done
    • Runs fast once inside the tool
    • Varies based on the logic and operations
    • Where optimization efforts should focus
Implication:
  • Multiple small tool calls = Multiple overhead costs
  • Single larger tool call = One overhead cost, fast internal execution
  • Recommendation: Combine related operations into single tools when possible
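
The cost model above can be sketched numerically. The overhead and work durations below are assumed, illustrative values, not measured wxO figures:

```python
# Hypothetical cost model: every tool invocation pays a fixed overhead
# before its internal work runs. Numbers are illustrative only.
OVERHEAD = 0.5   # seconds of per-call overhead (assumed)
WORK = 0.05      # seconds of internal work per operation (assumed)

def cost_separate_calls(n_ops):
    """n_ops small tools: the overhead is paid once per call."""
    return n_ops * (OVERHEAD + WORK)

def cost_combined_call(n_ops):
    """One larger tool: the overhead is paid once, work runs back to back."""
    return OVERHEAD + n_ops * WORK

separate = cost_separate_calls(5)  # ≈ 2.75 s: overhead paid five times
combined = cost_combined_call(5)   # ≈ 0.75 s: overhead paid once
```

Under these assumptions, combining five related operations into one tool cuts total latency by more than two thirds, which is why the recommendation above favors fewer, larger tool calls.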

Performance Characteristics by Tool Type

Python Tools:
  • Call overhead: Present but minimal
  • Internal execution: Very fast for most operations
  • Overall: Fast to very fast
  • Best for: Deterministic logic, data processing, API calls
Langflow Tools:
  • Call overhead: Includes Langflow initialization (2+ seconds per invocation)
  • Internal execution: Depends on LLM operations
  • Overall: Slower due to LLM inference (if using LLM)
  • Best for: LLM reasoning, NLP tasks, document analysis

Python & Langflow Tool Performance

Shared Technical Constraints

Both Python and Langflow tools operate in secure sandboxes with identical constraints:
| Constraint | Impact | Best Practice |
| --- | --- | --- |
| Isolated pod execution | Each tool instance runs in a separate pod for tenant isolation and security | Design for stateless, independent execution |
| 2 CPU cores maximum | Limited computational resources per tool instance | Optimize algorithms and avoid CPU-intensive operations |
| 2GB memory maximum | Limited memory per tool instance | Manage memory efficiently, avoid large data structures |
| 2-minute timeout | Maximum execution time per call | Design for timeout awareness, break long operations into chunks |
| No GPU access | GPU operations will fail | Use CPU-optimized algorithms and libraries only |
| Cold start penalty | First run takes longer; subsequent runs within 72 hours are faster | Expect initial latency; warm pods improve performance |
| Stateless execution | No state persists between calls | Use external storage (Redis, S3, database) |
| Read-only filesystem | Cannot write files locally | Use in-memory buffers or external storage |
| Network isolation | Outbound requests only | Design for outbound-only patterns |
Security Note: Due to tenant isolation requirements and the potentially unsecured nature of user-authored code, wxO runs each Python and Langflow tool instance as a separate pod with strict resource limits. This ensures security and prevents resource contention between tenants.
Performance Note: Tool pods experience cold start latency on first invocation. Once warmed up, the pod remains available for approximately 6 hours, with the time extended upon continuous use of the tool. This provides faster execution for subsequent calls. Plan for initial latency in performance testing and user experience design.
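
The "break long operations into chunks" practice can be sketched as timeout-aware processing: stop work well before the platform's 2-minute limit and return a resumable cursor. The 100-second budget, doubling "work", and result shape below are all illustrative assumptions, not a wxO API:

```python
import time

# Leave headroom under the platform's 2-minute timeout (value assumed).
TIME_BUDGET = 100  # seconds

def process_items(items, start_index=0, budget=TIME_BUDGET):
    """Process items until done or out of budget; return a resume cursor."""
    deadline = time.monotonic() + budget
    results = []
    i = start_index
    while i < len(items):
        if time.monotonic() >= deadline:
            # Out of budget: return partial results plus a cursor the
            # caller can pass back in (state must live outside the tool).
            return {"done": False, "next_index": i, "results": results}
        results.append(items[i] * 2)  # placeholder for the real work
        i += 1
    return {"done": True, "next_index": i, "results": results}

out = process_items([1, 2, 3])
```

Because execution is stateless, the caller (e.g. a Flow) would persist `next_index` and partial results externally between invocations rather than relying on the pod.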

Python Tools

Execution Speed: Very fast (simple operations), fast (data processing), variable (external API calls)
Optimization Strategies:
  • Minimize and batch external API calls
  • Use efficient algorithms (O(n) vs O(n²))
  • Choose performant libraries
  • Implement external caching (Redis) for expensive operations

Langflow Tools

Execution Speed: Minimum 2+ seconds (initialization overhead per run), then variable depending on workflow complexity
Key Consideration: Due to the initialization overhead on every run, Langflow tools are most effective for operations that take longer to execute (but remain under the 2-minute timeout). The initialization penalty becomes less significant when the actual workflow processing time is substantial.
Note: LLM operations in Langflow are optional. Use Langflow when you need its specific AI components or have existing Langflow workflows, not solely for LLM capabilities.
Optimization Strategies:
  • Use for longer-running operations where initialization overhead is proportionally smaller
  • Minimize and combine LLM calls into single prompts (when using LLMs)
  • Use concise, focused prompts with minimal context (when using LLMs)
  • Choose smaller models for simple tasks, larger for complex reasoning (when using LLMs)
  • Cache expensive operation results externally (Redis) for repeated queries
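
The "combine LLM calls into single prompts" strategy amounts to batching questions before the model round trip, so initialization and inference overhead are paid once instead of per question. A minimal sketch; the prompt template, document, and question list are illustrative:

```python
def build_combined_prompt(document, questions):
    """Merge several would-be LLM calls into one batched prompt."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Answer each numbered question about the document below, "
        "one answer per line.\n\n"
        f"Document:\n{document}\n\n"
        f"Questions:\n{numbered}"
    )

prompt = build_combined_prompt(
    "Invoice #123, total $250, due 2024-06-01.",
    ["What is the invoice number?", "What is the total?", "When is it due?"],
)
# One model call now replaces three, amortizing the per-call overhead.
```

The same batching idea applies to any per-call cost listed above, not only LLM inference.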

API & MCP Tool Performance

API Tools (OpenAPI-based)

Performance: Depends entirely on the external service (network latency, service speed, payload size, rate limiting, authentication)
Timeout:
  • Synchronous calls: 2 minutes (same as Python/Langflow/MCP)
  • Asynchronous calls: None (via OpenAPI callback syntax)
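
Asynchronous API tools rely on OpenAPI's callback mechanism: the operation returns immediately and the result is delivered later to a URL supplied in the request. A minimal sketch of the standard OpenAPI 3.0 callback syntax; the path, field names, and descriptions are illustrative:

```yaml
paths:
  /reports:
    post:
      summary: Start a long-running report (returns immediately)
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                callbackUrl:        # where results are delivered later
                  type: string
      responses:
        '202':
          description: Accepted; processing continues asynchronously
      callbacks:
        onComplete:
          '{$request.body#/callbackUrl}':   # runtime expression
            post:
              requestBody:
                content:
                  application/json:
                    schema:
                      type: object
              responses:
                '200':
                  description: Callback received
```

Because the initial response returns within the 2-minute window and the real work completes out of band, this pattern sidesteps the synchronous timeout entirely.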

MCP Tools (Model Context Protocol)

Performance: Varies by implementation (tool design, external dependencies, protocol overhead, resource requirements)
Timeout: 2 minutes (same as Python/Langflow/synchronous API tools; the MCP protocol does not support async)

Summary

Key Points:
  • wxO Flow: First choice for most scenarios - supports connections, security, custom code, no timeout limit
  • Python Tools: For custom libraries not supported in wxO Flow code blocks
  • Langflow Tools: For Langflow-specific AI components
  • API Tools: For OpenAPI-based external service integrations - performance depends on external service
  • MCP Tools: For Model Context Protocol capabilities - performance varies by implementation
  • Tool call overhead exists: Combine operations when possible
  • 2-minute timeout: Applies to synchronous tools such as Python, Langflow, OpenAPI sync, and MCP tools

Related Guides: