Note: “Flow” in this document refers to wxO Agentic Workflow
Overview
Tools are executable units in watsonx Orchestrate that perform specific tasks (API calls, data processing, calculations) within isolated, secure environments. They are invoked by Agents or orchestrated by Flows.
Tool Types in wxO
Choosing the Right Tool Type
wxO Flow (first choice): Native support for connections/security, Python code snippets, orchestration of wxO agents, tools, and people (e.g., confirmation before a transaction, custom forms), built-in document extraction and processing, LLM support.
Python Tools: For custom libraries not available in wxO Flow code blocks.
Langflow Tools: For Langflow-specific AI components or existing Langflow flows.
Tool Type Comparison
| Tool Type | Runtime/Base | Timeout | Key Features | Best For |
|---|---|---|---|---|
| wxO Flow | Visual workflow | None | Stateful, resumable, orchestrates Agents/Tools/People, LLM support | Multi-step workflows, API integrations, human-in-loop, simple document processing |
| Python | Python 3.12 | 2 min | Stateless, read-only FS, outbound network only | Data processing, business logic, calculations |
| Langflow | Langflow 1.7.1 | 2 min | Stateless, read-only FS, outbound network only, 2+ sec initialization | LLM processing, RAG, complex document analysis |
| API | OpenAPI spec | 2 min (sync) / None (async) | External REST APIs, network-dependent | Third-party integrations, external services |
| MCP | MCP protocol | 2 min | Extended capabilities, varies by implementation | Custom integrations, protocol-based interactions |
Understanding Tool Performance
The Granularity Principle
Key Insight: Tool performance has two distinct components:
1. Tool Call Overhead: The cost of invoking a tool
- Includes initialization, context setup, and result handling
- Exists for every tool invocation
- Relatively consistent per tool type
2. Execution Inside the Tool: The actual work being done
- Runs fast once inside the tool
- Varies based on the logic and operations
- Where optimization efforts should focus
Implications:
- Multiple small tool calls = multiple overhead costs
- Single larger tool call = one overhead cost, fast internal execution
- Recommendation: Combine related operations into single tools when possible
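The granularity principle can be sketched in plain Python. The helpers below are hypothetical stand-ins for the work a tool would do (e.g., external API lookups); the point is that exposing one combined tool instead of two fine-grained ones means the per-invocation overhead is paid once:

```python
def fetch_order(order_id):
    # Hypothetical lookup; in practice this might call an external API.
    return {"order_id": order_id, "customer_id": "C-42", "total": 99.0}

def fetch_customer(customer_id):
    # Hypothetical lookup for the customer record.
    return {"customer_id": customer_id, "name": "Ada", "tier": "gold"}

# Exposing fetch_order and fetch_customer as separate tools means two
# invocations (two overhead costs). Exposing one combined tool means the
# overhead is paid once while both lookups run fast inside the tool.
def get_order_summary(order_id):
    """Single tool that performs both related lookups internally."""
    order = fetch_order(order_id)
    customer = fetch_customer(order["customer_id"])
    return {
        "order_id": order["order_id"],
        "total": order["total"],
        "customer_name": customer["name"],
    }
```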
Performance Characteristics by Tool Type
Python Tools:
- Call overhead: Present but minimal
- Internal execution: Very fast for most operations
- Overall: Fast to very fast
- Best for: Deterministic logic, data processing, API calls
Langflow Tools:
- Call overhead: Includes Langflow initialization (2+ seconds per run)
- Internal execution: Depends on LLM operations
- Overall: Slower when LLM inference is involved
- Best for: LLM reasoning, NLP tasks, document analysis
Python & Langflow Tool Performance
Shared Technical Constraints
Both Python and Langflow tools operate in secure sandboxes with identical constraints:
| Constraint | Impact | Best Practice |
|---|---|---|
| Isolated pod execution | Each tool instance runs in a separate pod for tenant isolation and security | Design for stateless, independent execution |
| 2 CPU cores maximum | Limited computational resources per tool instance | Optimize algorithms and avoid CPU-intensive operations |
| 2GB memory maximum | Limited memory per tool instance | Manage memory efficiently, avoid large data structures |
| 2-minute timeout | Maximum execution time per call | Design for timeout awareness, break long operations into chunks |
| No GPU access | GPU operations will fail | Use CPU-optimized algorithms and libraries only |
| Cold start penalty | First run takes longer; subsequent runs on a warm pod are faster | Expect initial latency; warm pods improve performance |
| Stateless execution | No state persists between calls | Use external storage (Redis, S3, database) |
| Read-only filesystem | Cannot write files locally | Use in-memory buffers or external storage |
| Network isolation | Outbound requests only | Design for outbound-only patterns |
Security Note: Due to tenant isolation requirements and the potentially unsecured nature of user-authored code, wxO runs each Python and Langflow tool instance as a separate pod with strict resource limits. This ensures security and prevents resource contention between tenants.
Performance Note: Tool pods experience cold start latency on first invocation. Once warmed up, the pod remains available for approximately 6 hours, with the time extended upon continuous use of the tool. This provides faster execution for subsequent calls. Plan for initial latency in performance testing and user experience design.
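As one illustration of working within the read-only filesystem constraint, a tool can build file-like artifacts entirely in memory instead of writing to local disk. This is a minimal sketch; the upload of the resulting bytes to external storage (e.g., S3) is left out:

```python
import csv
import io

def rows_to_csv_bytes(rows):
    """Serialize rows to CSV entirely in memory (no local file writes)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    # Return bytes suitable for upload to external storage (e.g., S3).
    return buf.getvalue().encode("utf-8")
```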
Python Tools
Execution Speed: Very fast (simple operations), fast (data processing), variable (external API calls)
Optimization Strategies:
- Minimize and batch external API calls
- Use efficient algorithms (O(n) vs O(n²))
- Choose performant libraries
- Implement external caching (Redis) for expensive operations
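External caching for expensive operations might look like the following sketch. An in-memory stand-in mirrors the Redis `get`/`setex` interface so the example is self-contained; in production the `cache` object would be a real `redis.Redis` client, and the key prefix and TTL shown here are illustrative:

```python
import json
import time

class InMemoryCache:
    """Stand-in mirroring the Redis get/setex interface for this sketch."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        value, expires = self._store.get(key, (None, 0.0))
        return value if expires > time.time() else None
    def setex(self, key, ttl, value):
        self._store[key] = (value, time.time() + ttl)

cache = InMemoryCache()  # in production: redis.Redis(host=..., port=6379)
calls = {"count": 0}

def expensive_lookup(key):
    calls["count"] += 1  # counts invocations to show the cache working
    return {"key": key, "value": key.upper()}

def cached_lookup(key, ttl_seconds=300):
    """Check the cache first; fall back to the expensive operation."""
    hit = cache.get(f"tool-cache:{key}")
    if hit is not None:
        return json.loads(hit)
    result = expensive_lookup(key)
    cache.setex(f"tool-cache:{key}", ttl_seconds, json.dumps(result))
    return result
```

Because tool execution is stateless, the cache must live outside the pod; a dict-based cache like the stand-in above would be lost between invocations.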
Langflow Tools
Execution Speed: Minimum 2+ seconds (initialization overhead per run), then variable depending on workflow complexity
Key Consideration: Because of the initialization overhead on every run, Langflow tools are most effective for operations that take longer to execute (but remain under the 2-minute timeout). The initialization penalty becomes less significant when the actual workflow processing time is substantial.
Note: LLM operations in Langflow are optional. Use Langflow when you need its specific AI components or have existing Langflow workflows, not solely for LLM capabilities.
Optimization Strategies:
- Use for longer-running operations where initialization overhead is proportionally smaller
- Minimize and combine LLM calls into single prompts (when using LLMs)
- Use concise, focused prompts with minimal context (when using LLMs)
- Choose smaller models for simple tasks, larger for complex reasoning (when using LLMs)
- Cache expensive operation results externally (Redis) for repeated queries
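The "combine LLM calls into single prompts" advice reduces to prompt construction: batch several questions into one prompt so the model is invoked once rather than once per question. The sketch below only builds the prompt; passing it to an LLM component is left to the workflow:

```python
def build_combined_prompt(document, questions):
    """Merge several questions into one prompt so a single LLM call can
    answer all of them, instead of one call (and one overhead) per question."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Answer each numbered question about the document below.\n"
        f"Questions:\n{numbered}\n\nDocument:\n{document}"
    )
```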
API & MCP Tool Performance
API Tools (OpenAPI-based)
Performance: Depends entirely on the external service (network latency, service speed, payload size, rate limiting, authentication)
Timeout:
- Synchronous calls: 2 minutes (same as Python/Langflow/MCP)
- Asynchronous calls: None (via OpenAPI callback syntax)
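An asynchronous API tool is declared with the OpenAPI `callbacks` keyword, so the external service can deliver the result later instead of holding the connection open. The fragment below is an illustrative sketch: the path, operation name, and the `callbackUrl` request-body field are assumptions, not wxO-mandated names.

```yaml
paths:
  /jobs:
    post:
      operationId: startJob
      responses:
        '202':
          description: Job accepted; result delivered via callback
      callbacks:
        onJobComplete:
          # Runtime expression resolving to the URL the caller supplied
          '{$request.body#/callbackUrl}':
            post:
              requestBody:
                content:
                  application/json:
                    schema:
                      type: object
              responses:
                '200':
                  description: Callback received
```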
MCP Tools (Model Context Protocol)
Performance: Varies by implementation (tool design, external dependencies, protocol overhead, resource requirements)
Timeout: 2 minutes (same as Python/Langflow/synchronous API tools; the MCP protocol does not support async)
Summary
Key Points:
- wxO Flow: First choice for most scenarios; supports connections, security, custom code, and has no timeout limit
- Python Tools: For custom libraries not supported in wxO Flow code blocks
- Langflow Tools: For Langflow-specific AI components
- API Tools: For OpenAPI-based external service integrations - performance depends on external service
- MCP Tools: For Model Context Protocol capabilities - performance varies by implementation
- Tool call overhead exists: Combine operations when possible
- 2-minute timeout: Applies to synchronous tools: Python, Langflow, synchronous OpenAPI, and MCP tools