
Overview

BeeAI Framework is an open-source, Linux Foundation-hosted framework for building production-grade AI agents with built-in constraint enforcement and rule-based governance. It supports both Python and TypeScript with native watsonx Orchestrate integration. This tutorial shows you how to create and deploy a BeeAI external agent in Python that integrates with watsonx Orchestrate.

Prerequisites

Before starting, ensure you have:
  • Python 3.11+ and Poetry installed
  • Docker with Docker Compose
  • An IBM Cloud account with access to watsonx.ai and watsonx Orchestrate
  • (Optional) A Tavily API key for the web search tool

Development & Testing

Step 1: Project setup

Create a project directory and initialize Poetry:
mkdir watsonx_orchestrate_and_beeai_framework
cd watsonx_orchestrate_and_beeai_framework
poetry init --no-interaction
Step 2: Configure dependencies

Declare and install the project dependencies, then make sure the lock file is current:
poetry add beeai-framework pydantic-settings python-dotenv tavily-python
poetry install --no-root
poetry lock
Step 3: Configure your environment variables

Create a .env file in your project root and add your credentials:
touch .env
.env
# watsonx Configuration
WATSONX_URL=https://us-south.ml.cloud.ibm.com
WATSONX_REGION=us-south
WATSONX_PROJECT_ID=your-project-id-here
WATSONX_API_KEY=your-watsonx-api-key-here

# External Agent Security
API_KEY=your-secure-random-key

# Optional: Web Search
TAVILY_API_KEY=your-tavily-api-key-here
Where to find these values:
  • WATSONX_URL: Usually https://us-south.ml.cloud.ibm.com or your region’s URL
  • WATSONX_REGION: Your IBM Cloud region (e.g., us-south, eu-gb, jp-tok)
  • WATSONX_PROJECT_ID: Found in your watsonx.ai project settings
  • WATSONX_API_KEY: Create in IBM Cloud → Manage → Access (IAM) → API keys
  • API_KEY: Generate with openssl rand -base64 32 or use https://randomkeygen.com
  • TAVILY_API_KEY: Get free at https://tavily.com (optional)
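If you prefer to stay in Python rather than use openssl, an equivalent secure random key for API_KEY can be generated with the standard library (a minimal sketch; the function name is illustrative):

```python
import secrets

def generate_api_key(n_bytes: int = 32) -> str:
    """Return a cryptographically secure, URL-safe random key
    suitable for the API_KEY variable above."""
    return secrets.token_urlsafe(n_bytes)

print(generate_api_key())
```

Run it once and paste the printed value into your .env file.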
Step 4: Create project structure

Create the following directory structure:
mkdir wxo_beeai
touch wxo_beeai/__init__.py wxo_beeai/settings.py wxo_beeai/tools.py wxo_beeai/app.py Dockerfile docker-compose.yml
watsonx_orchestrate_and_beeai_framework/
├── wxo_beeai/
│   ├── __init__.py
│   ├── settings.py
│   ├── tools.py
│   └── app.py
├── .env
├── pyproject.toml
├── Dockerfile
└── docker-compose.yml
Step 5: Create settings module

Create wxo_beeai/settings.py. This file loads environment variables from the .env file, validates the required configuration with Pydantic, and provides type-safe access to settings across the app.
wxo_beeai/settings.py
from dotenv import load_dotenv
from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict

load_dotenv()

__all__ = ["AppSettings"]

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="", str_to_upper=False)
    api_key: str = Field(..., description="Project API Key")
    watsonx_url: str
    watsonx_project_id: str
    watsonx_api_key: str
    tavily_api_key: str = ""  # optional; only needed for the web search tool
    log_intermediate_steps: bool = Field(default=False)
    watsonx_default_model: str = "ibm/granite-3-3-8b-instruct"

AppSettings = Settings()
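To see what the Pydantic model buys you, here is a stdlib-only sketch (purely illustrative, not part of the app) of the check it performs at startup: every required setting must be present in the environment, or construction fails with a validation error.

```python
import os

# Required settings, mirroring the non-defaulted fields above (illustrative).
REQUIRED = ["API_KEY", "WATSONX_URL", "WATSONX_PROJECT_ID", "WATSONX_API_KEY"]

def missing_settings(env: dict[str, str]) -> list[str]:
    """Return the names of required settings absent or empty in `env`."""
    return [name for name in REQUIRED if not env.get(name)]

# Settings() raises a ValidationError in the same situation:
problems = missing_settings(dict(os.environ))
```
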
Step 6: Create custom tools

Create wxo_beeai/tools.py. This module defines a web search tool powered by the Tavily API. The @tool decorator registers it with the BeeAI Framework, and the tool is configured to perform advanced searches returning up to 10 results.
wxo_beeai/tools.py
from beeai_framework.tools import tool
from tavily import TavilyClient
from wxo_beeai.settings import AppSettings

@tool
def search_web_tool(query: str) -> dict | None:
    """
    Searches the web for the given query and returns the most relevant results.
    """
    client = TavilyClient(api_key=AppSettings.tavily_api_key)

    results = client.search(
        query,
        search_depth="advanced",
        max_results=10,
        include_images=False,
        include_answer=False,
    )

    return results
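Tavily's response is a dict whose "results" key holds a list of hits with "title", "url", and "content" fields. If you want the agent to see a leaner payload, a small post-processing helper like the following could be added (a hypothetical sketch; summarize_results is not part of the tutorial code):

```python
def summarize_results(results: dict) -> list[dict[str, str]]:
    """Reduce a Tavily search response to (title, url, snippet) triples,
    truncating each snippet so the prompt stays small."""
    return [
        {
            "title": hit.get("title", ""),
            "url": hit.get("url", ""),
            "snippet": hit.get("content", "")[:300],
        }
        for hit in results.get("results", [])
    ]
```
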

Step 7: Create main application

Create wxo_beeai/app.py. The following code launches an HTTP server that exposes a RequirementAgent built for deep research. It wires together the key components: an unconstrained memory, a watsonx chat model, reasoning tools (Think + web search), and runtime rules that guide the agent's thinking process.
wxo_beeai/app.py
import logging
import sys
import traceback
import warnings

from beeai_framework.agents.requirement import RequirementAgent
from beeai_framework.agents.requirement.requirements.conditional import (
    ConditionalRequirement,
)
from beeai_framework.errors import FrameworkError
from beeai_framework.middleware.trajectory import GlobalTrajectoryMiddleware
from beeai_framework.tools import Tool
from beeai_framework.tools.think import ThinkTool
from beeai_framework.adapters.watsonx_orchestrate import (
    WatsonxOrchestrateServer,
    WatsonxOrchestrateServerConfig,
)
from beeai_framework.memory import UnconstrainedMemory
from beeai_framework.adapters.watsonx import WatsonxChatModel
from wxo_beeai.settings import AppSettings
from wxo_beeai.tools import search_web_tool

warnings.filterwarnings("ignore")

logger = logging.getLogger()
logger.setLevel(logging.INFO)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)

def main() -> None:

    memory = UnconstrainedMemory()

    llm = WatsonxChatModel(
        model_id=AppSettings.watsonx_default_model.removeprefix("watsonx/").lower(),
        api_key=AppSettings.watsonx_api_key,
        project_id=AppSettings.watsonx_project_id,
        base_url=AppSettings.watsonx_url,
        middlewares=[
            GlobalTrajectoryMiddleware(enabled=AppSettings.log_intermediate_steps)
        ],
    )

    agent = RequirementAgent(
        llm=llm,
        memory=memory,
        tools=[ThinkTool(), search_web_tool],
        role="a deep researcher",
        instructions=[
            "Your task is to conduct in-depth research on the given topic.",
            "Before you start, thoroughly prepare a step-by-step plan for how you will solve the task.",
            "After each action, reflect on what you have obtained and what you need to do to gather evidence for the final answer.",
        ],
        middlewares=[
            GlobalTrajectoryMiddleware(
                included=[Tool], enabled=AppSettings.log_intermediate_steps
            )
        ],
        requirements=[
            ConditionalRequirement(ThinkTool, consecutive_allowed=False),
            ConditionalRequirement(
                ThinkTool, force_at_step=1, force_after=[search_web_tool]
            ),
        ],
    )

    config = WatsonxOrchestrateServerConfig(
        port=8080, host="0.0.0.0", api_key=AppSettings.api_key
    )

    server = WatsonxOrchestrateServer(config=config)
    server.register(agent)
    server.serve()

if __name__ == "__main__":
    try:
        main()
    except FrameworkError as e:
        traceback.print_exc()
        sys.exit(e.explain())
Key components:
  • Memory — UnconstrainedMemory: Stores the full conversation history
  • LLM — WatsonxChatModel: The Watsonx chat model, configured from AppSettings, with middleware for logging intermediate steps
  • Agent — RequirementAgent: The main reasoning agent with planning capabilities, research-oriented instructions, and tool access
  • Tools — ThinkTool() and search_web_tool: Enable structured reasoning and online information retrieval
  • Requirements — ConditionalRequirement rules:
    • Enforces “thinking” at step 1
    • Prevents consecutive Think tool calls
    • Requires reflection after searches
  • Middleware — GlobalTrajectoryMiddleware: Logs intermediate reasoning and tool usage when enabled in settings
  • Server — WatsonxOrchestrateServer: Registers the agent and serves it via HTTP (0.0.0.0:8080) using the configured API key
  • Error handling — Catches FrameworkError, prints a traceback, and exits cleanly with a clear message
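To build intuition for what the two ConditionalRequirement rules enforce, here is a plain-Python sketch (hypothetical, no BeeAI imports) of the constraint they impose on a sequence of tool calls, where "think" stands for ThinkTool and "search" for search_web_tool:

```python
def sequence_allowed(calls: list[str]) -> bool:
    """Check a tool-call sequence against the two requirements:
    - "think" must run at step 1,
    - "think" may never run twice in a row,
    - a "search" must be followed by a "think" (reflection)."""
    if not calls or calls[0] != "think":
        return False
    for prev, cur in zip(calls, calls[1:]):
        if prev == "think" and cur == "think":
            return False
        if prev == "search" and cur != "think":
            return False
    return True

# think -> search -> think is allowed; starting with a search,
# or thinking twice in a row, is not.
```
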
Step 8: Docker configuration

Create Dockerfile:
Dockerfile
FROM python:3.11-slim AS python-base

# https://python-poetry.org/docs#ci-recommendations
ENV POETRY_VERSION=2.1.1
ENV POETRY_HOME=/opt/poetry
ENV POETRY_VENV=/opt/poetry-venv

# Tell Poetry where to place its cache and virtual environment
ENV POETRY_CACHE_DIR=/opt/.cache

# Create stage for Poetry installation
FROM python-base AS poetry-base

# Creating a virtual environment just for poetry and install it with pip
RUN python3 -m venv $POETRY_VENV \
	&& $POETRY_VENV/bin/pip install -U pip setuptools \
	&& $POETRY_VENV/bin/pip install poetry==${POETRY_VERSION}

# Create a new stage from the base python image
FROM python-base AS example-app

# Copy Poetry to app image
COPY --from=poetry-base ${POETRY_VENV} ${POETRY_VENV}

# Add Poetry to PATH
ENV PATH="${PATH}:${POETRY_VENV}/bin"

WORKDIR /app

# Copy Dependencies
COPY poetry.lock pyproject.toml ./

# Tell Poetry to not use its own venv
RUN poetry config virtualenvs.create false

# [OPTIONAL] Validate the project is properly configured
RUN poetry check

# Copy Application
COPY . /app

# Install Dependencies
RUN poetry install --no-interaction --no-cache

# Run Application
EXPOSE 8080

ENTRYPOINT ["poetry", "run", "python", "wxo_beeai/app.py"]
Create docker-compose.yml:
docker-compose.yml
services:
  app:
    container_name: wxo_beeai_test
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    env_file:
      - .env
    environment:
      - PYTHONUNBUFFERED=1
    restart: "no"  # quote "no" so YAML does not parse it as a boolean
Step 9: Test locally

Run your agent locally with Docker Compose:
docker-compose up --build
Or run directly with Poetry:
poetry run python wxo_beeai/app.py
Tip: Visit http://localhost:8080/docs to see the FastAPI documentation and test the /chat/completions endpoint.
Test with curl:
curl -X POST "http://localhost:8080/chat/completions" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "messages": [
      {"role": "user", "content": "What are the latest developments in quantum computing?"}
    ]
  }'
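The same request can be issued from Python with only the standard library (a sketch; AGENT_URL and API_KEY are placeholders for your own values, and the X-API-Key header mirrors the curl call above):

```python
import json
import urllib.request

AGENT_URL = "http://localhost:8080/chat/completions"  # placeholder
API_KEY = "your-api-key"                              # placeholder

def build_chat_request(question: str) -> dict:
    """Build the OpenAI-style chat payload the endpoint expects."""
    return {"messages": [{"role": "user", "content": question}]}

def ask_agent(question: str) -> dict:
    """POST the question to the running agent and return the parsed JSON reply."""
    req = urllib.request.Request(
        AGENT_URL,
        data=json.dumps(build_chat_request(question)).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:  # requires the server to be running
        return json.loads(resp.read())
```
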

Production Deployment

Step 1: Deploy to IBM Code Engine

Create a Code Engine project

  1. Navigate to IBM Cloud Code Engine Projects
  2. Click Create and name your project (e.g., wxo-beeai)

Create registry secret

  1. Go to Manage → Access (IAM) → API keys
  2. Click Create to generate an API key and save it; you will use it for the registry secret when creating the application

Create the application

  1. In your Code Engine project, select Applications → Create
  2. Under Code, select Build container image from source code
  3. Configure build details:
    • Code repo URL: Your GitHub repository URL
    • SSH secret: None
    • Branch: main
    • Dockerfile: Dockerfile
    • Registry secret: Create one using your API key
  4. Configure application:
    • Name: beeai-agent
    • Domain mappings: Public
  5. Add environment variables:
    • WATSONX_URL
    • WATSONX_REGION
    • WATSONX_PROJECT_ID
    • WATSONX_API_KEY
    • API_KEY
    • TAVILY_API_KEY (optional)
  6. Click Create
Visit your application URL with /docs appended, e.g. https://beeai-agent.xxxxx.codeengine.appdomain.cloud/docs
Step 2: Register the external agent in watsonx Orchestrate

  • In watsonx Orchestrate, click the hamburger menu → Agent Configuration
  • Select Assistants → Add assistant → External Assistant
  • Check External-agent Assistant
  • Configure:
    • Display Name: Research Agent
    • Description: Agent to conduct deep research using web search
    • API Key: Your API_KEY from .env
    • Service Instance URL: https://your-app-url/chat/completions
  • Click Save
Step 3: Use your agent

Use your agent from the watsonx Orchestrate GUI or via the API:
  1. Navigate to Chat in watsonx Orchestrate
  2. Ask research questions that route to your agent:
    • What are the latest developments in quantum computing?
    • Research the current state of renewable energy adoption.
    • What are the recent breakthroughs in AI?
The BeeAI agent will:
  1. Create a research plan
  2. Search the web for relevant information
  3. Reflect on findings after each search
  4. Provide a research-based answer

Next Steps