Chapter 3 · 20m

Complex tool definitions

The @function_tool decorator gets you started quickly, but production agents need more control. You may need to define tool schemas programmatically, pass complex nested parameters, group tools into reusable sets, or control whether the LLM should speak while a tool runs. This chapter covers all four patterns.

What you'll learn

  • How to create tools programmatically without decorators
  • How to define complex parameter types with raw JSON schema
  • How to group and manage tools with Toolset
  • How to control tool execution behavior with ToolFlag

Programmatic tool creation

Decorators are convenient when you know your tools at class definition time. But sometimes you need to create tools dynamically — from a database of available actions, from an API specification, or based on runtime configuration. For these cases, you can create tools programmatically.

programmatic_tools.py (Python)
from livekit.agents import Agent, Tool, RunContext


async def check_menu_handler(context: RunContext, category: str) -> str:
  """Check available menu items for a category."""
  menu = {
      "appetizers": ["Bruschetta", "Soup of the Day"],
      "mains": ["Grilled Salmon", "Ribeye Steak"],
      "desserts": ["Tiramisu", "Cheesecake"],
  }
  items = menu.get(category, [])
  return f"Available {category}: {', '.join(items)}" if items else "Category not found"


check_menu_tool = Tool.create(
  name="check_menu",
  description="Check available menu items for a given category",
  handler=check_menu_handler,
  parameters={
      "category": {
          "type": "string",
          "description": "The menu category (appetizers, mains, or desserts)",
          "enum": ["appetizers", "mains", "desserts"],
      }
  },
)


class OrderTakerAgent(Agent):
  def __init__(self):
      super().__init__(
          instructions="You are the order taker at Bella Vista...",
          tools=[check_menu_tool],
      )

programmaticTools.ts (TypeScript)
import { Agent, Tool, RunContext } from "@livekit/agents";

const checkMenuTool = Tool.create({
  name: "check_menu",
  description: "Check available menu items for a given category",
  handler: async (context: RunContext, { category }: { category: string }) => {
    const menu: Record<string, string[]> = {
      appetizers: ["Bruschetta", "Soup of the Day"],
      mains: ["Grilled Salmon", "Ribeye Steak"],
      desserts: ["Tiramisu", "Cheesecake"],
    };
    const items = menu[category] ?? [];
    return items.length > 0
      ? `Available ${category}: ${items.join(", ")}`
      : "Category not found";
  },
  parameters: {
    category: {
      type: "string",
      description: "The menu category (appetizers, mains, or desserts)",
      enum: ["appetizers", "mains", "desserts"],
    },
  },
});

class OrderTakerAgent extends Agent {
  constructor() {
    super({
      instructions: "You are the order taker at Bella Vista...",
      tools: [checkMenuTool],
    });
  }
}

What's happening

Tool.create() gives you the same result as @function_tool, but you define the name, description, handler, and parameters separately. This is useful when the tool definition comes from an external source, when you want to share a handler across multiple tool definitions, or when you need parameter constraints like enum that type hints alone cannot express.
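
When tool definitions come from runtime data, the same pattern can be sketched in plain Python. Everything below (`make_lookup_handler`, `TOOL_CONFIG`, the category data) is illustrative; each resulting dict mirrors the arguments `Tool.create()` takes in this chapter and would be unpacked with `Tool.create(**definition)`:

```python
def make_lookup_handler(table: dict[str, list[str]]):
    """Build a handler closed over a specific lookup table."""
    async def handler(context, category: str) -> str:
        items = table.get(category, [])
        return ", ".join(items) if items else "Category not found"
    return handler


# Imagine this arriving from a database or config file at startup.
TOOL_CONFIG = [
    {"name": "check_menu", "table": {"mains": ["Grilled Salmon"]}},
    {"name": "check_specials", "table": {"today": ["Soup of the Day"]}},
]

tool_definitions = [
    {
        "name": cfg["name"],
        "description": f"Look up entries for {cfg['name']}",
        "handler": make_lookup_handler(cfg["table"]),
        "parameters": {
            "category": {"type": "string", "description": "Lookup category"},
        },
    }
    for cfg in TOOL_CONFIG
]
# Each dict would then be passed as Tool.create(**definition).
```

The closure in `make_lookup_handler` is the important part: each generated tool gets its own data without sharing mutable state.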

Raw JSON schema for complex parameters

Some tools need parameters that go beyond simple strings and numbers. A tool that places an order might need nested objects, arrays of items, or conditional fields. The raw_schema option lets you pass a full JSON Schema definition.

raw_schema_tool.py (Python)
from livekit.agents import Tool, RunContext


async def place_order_handler(context: RunContext, **kwargs) -> str:
  items = kwargs.get("items", [])
  special_requests = kwargs.get("special_requests", "")
  total = sum(item.get("price", 0) * item.get("quantity", 1) for item in items)
  item_names = [f"{item['quantity']}x {item['name']}" for item in items]
  note = f" Note: {special_requests}" if special_requests else ""
  return f"Order placed: {', '.join(item_names)}. Total: ${total:.2f}.{note}"


place_order_tool = Tool.create(
  name="place_order",
  description="Place a complete food order with multiple items",
  handler=place_order_handler,
  raw_schema={
      "type": "object",
      "properties": {
          "items": {
              "type": "array",
              "description": "List of order items",
              "items": {
                  "type": "object",
                  "properties": {
                      "name": {
                          "type": "string",
                          "description": "Menu item name",
                      },
                      "quantity": {
                          "type": "integer",
                          "description": "Number of this item",
                          "minimum": 1,
                      },
                      "price": {
                          "type": "number",
                          "description": "Price per item",
                      },
                      "modifications": {
                          "type": "array",
                          "items": {"type": "string"},
                          "description": "Special modifications",
                      },
                  },
                  "required": ["name", "quantity", "price"],
              },
          },
          "special_requests": {
              "type": "string",
              "description": "Any special requests for the entire order",
          },
      },
      "required": ["items"],
  },
)

rawSchemaTool.ts (TypeScript)
import { Tool, RunContext } from "@livekit/agents";

const placeOrderTool = Tool.create({
  name: "place_order",
  description: "Place a complete food order with multiple items",
  handler: async (context: RunContext, params: any) => {
    const items = params.items ?? [];
    const total = items.reduce(
      (sum: number, item: any) => sum + (item.price ?? 0) * (item.quantity ?? 1),
      0
    );
    const itemNames = items.map((item: any) => `${item.quantity}x ${item.name}`);
    return `Order placed: ${itemNames.join(", ")}. Total: $${total.toFixed(2)}.`;
  },
  rawSchema: {
    type: "object",
    properties: {
      items: {
        type: "array",
        description: "List of order items",
        items: {
          type: "object",
          properties: {
            name: { type: "string", description: "Menu item name" },
            quantity: { type: "integer", description: "Number of this item", minimum: 1 },
            price: { type: "number", description: "Price per item" },
            modifications: {
              type: "array",
              items: { type: "string" },
              description: "Special modifications",
            },
          },
          required: ["name", "quantity", "price"],
        },
      },
      special_requests: {
        type: "string",
        description: "Any special requests for the entire order",
      },
    },
    required: ["items"],
  },
});

Raw schema bypasses validation

When you use raw_schema, the framework sends your schema directly to the LLM without deriving it from type hints. You are responsible for ensuring the schema is valid JSON Schema. Mistakes surface as confusing LLM behavior — the model will pass wrong types or miss required fields. Test thoroughly.
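
A cheap guard inside the handler catches the most common failure before it derails the conversation. This is a hand-rolled sketch, not the framework's validation: it checks only top-level required keys, while a full validator (such as the third-party jsonschema package) covers types, nesting, and constraints like minimum:

```python
# Trimmed-down version of the order schema above, for illustration.
ORDER_SCHEMA = {
    "type": "object",
    "properties": {"items": {"type": "array"}},
    "required": ["items"],
}


def check_required(payload: dict, schema: dict) -> list[str]:
    """Return the names of required top-level keys missing from payload."""
    return [key for key in schema.get("required", []) if key not in payload]


# An LLM-supplied payload that forgot the required items list:
missing = check_required({"special_requests": "no onions"}, ORDER_SCHEMA)
# The handler can now return a clear error string ("Missing: items")
# instead of raising a KeyError mid-conversation.
```

Returning an error string the LLM can read is usually better than raising, because the model can then retry the call with corrected arguments.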

Toolset: managing groups of tools

When agents share tools or when you want to add and remove tools as a group, wrapping them in a Toolset keeps things organized.

toolset_example.py (Python)
from livekit.agents import Agent, Toolset, RunContext


# Create a reusable toolset for menu operations
menu_toolset = Toolset()


@menu_toolset.tool
async def check_menu(context: RunContext, category: str) -> str:
  """Check available menu items for a category."""
  return "Bruschetta, Soup of the Day, Caesar Salad"


@menu_toolset.tool
async def check_price(context: RunContext, item_name: str) -> str:
  """Check the price of a specific menu item."""
  prices = {"Bruschetta": 12, "Grilled Salmon": 28, "Tiramisu": 10}
  price = prices.get(item_name)
  return f"{item_name}: ${price}" if price else f"{item_name} not found"


@menu_toolset.tool
async def check_allergens(context: RunContext, item_name: str) -> str:
  """Check allergen information for a menu item."""
  return f"{item_name}: Contains gluten and dairy."


# Both agents can use the same toolset
class OrderTakerAgent(Agent):
  def __init__(self):
      super().__init__(
          instructions="You are the order taker at Bella Vista...",
          tools=[menu_toolset],
      )


class GreeterAgent(Agent):
  def __init__(self):
      super().__init__(
          instructions="You are the greeter at Bella Vista...",
          tools=[menu_toolset],
      )

toolsetExample.ts (TypeScript)
import { Agent, Toolset, RunContext } from "@livekit/agents";

const menuToolset = new Toolset();

menuToolset.addTool({
  name: "check_menu",
  description: "Check available menu items for a category.",
  parameters: { category: { type: "string", description: "The menu category" } },
  execute: async (context: RunContext, { category }: { category: string }) => {
    return "Bruschetta, Soup of the Day, Caesar Salad";
  },
});

menuToolset.addTool({
  name: "check_price",
  description: "Check the price of a specific menu item.",
  parameters: { item_name: { type: "string", description: "Name of the menu item" } },
  execute: async (context: RunContext, { item_name }: { item_name: string }) => {
    const prices: Record<string, number> = { Bruschetta: 12, "Grilled Salmon": 28, Tiramisu: 10 };
    const price = prices[item_name];
    return price !== undefined ? `${item_name}: $${price}` : `${item_name} not found`;
  },
});

// Both agents share the same toolset
class OrderTakerAgent extends Agent {
  constructor() {
    super({
      instructions: "You are the order taker at Bella Vista...",
      tools: [menuToolset],
    });
  }
}

What's happening

A Toolset is a container for tools that can be passed as a single unit to an agent's tools list. This is useful for organizing related tools (all menu operations in one set, all payment operations in another) and for sharing tool groups across agents. You can also add and remove entire toolsets dynamically, which you will see in the dynamic tools chapter.
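
Conceptually, a toolset is a named registry that flattens into a list of tools. The `SimpleToolset` class below is an illustrative stand-in for the framework's `Toolset`, not its implementation, but it shows why sharing works: both agents hold a reference to the same object, so a tool registered later is visible to everyone:

```python
class SimpleToolset:
    """Minimal registry mimicking the decorator-based registration pattern."""

    def __init__(self):
        self._tools = {}

    def tool(self, func):
        # Register the coroutine function under its own name, then
        # return it unchanged so it stays callable directly.
        self._tools[func.__name__] = func
        return func

    def flatten(self) -> list:
        """The agent-facing view: all registered tools as a flat list."""
        return list(self._tools.values())


menu_toolset = SimpleToolset()


@menu_toolset.tool
async def check_menu(context, category: str) -> str:
    return "Bruschetta, Soup of the Day, Caesar Salad"


@menu_toolset.tool
async def check_price(context, item_name: str) -> str:
    return f"{item_name}: $12"
```

Because registration keys on the function name, re-registering a tool with the same name replaces the old one, which is also the basis for dynamic updates.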

ToolFlag: controlling tool behavior

By default, when the LLM calls a tool, it waits for the result before generating its next response. But some tools are fast lookups where you want the LLM to keep talking while the tool runs. Others are slow operations where you want to suppress speech until the result arrives. ToolFlag gives you this control.

tool_flags.py (Python)
from livekit.agents import Agent, function_tool, RunContext, ToolFlag


class OrderTakerAgent(Agent):
  def __init__(self):
      super().__init__(
          instructions="You are the order taker at Bella Vista...",
      )

  @function_tool(flags=ToolFlag.RUN_IMMEDIATELY)
  async def log_order_event(self, context: RunContext, event: str) -> str:
      """Log an order event for analytics. Does not affect the conversation."""
      print(f"Order event: {event}")
      return "Logged"

  @function_tool(flags=ToolFlag.REQUIRE_RESULT)
  async def calculate_total(self, context: RunContext, item_ids: list[str]) -> str:
      """Calculate the order total. Wait for the result before responding."""
      total = len(item_ids) * 15.99
      return f"Order total: ${total:.2f} (before tax)"

toolFlags.ts (TypeScript)
import { Agent, functionTool, RunContext, ToolFlag } from "@livekit/agents";

class OrderTakerAgent extends Agent {
  constructor() {
    super({
      instructions: "You are the order taker at Bella Vista...",
    });
  }

  @functionTool({
    description: "Log an order event for analytics.",
    flags: ToolFlag.RUN_IMMEDIATELY,
  })
  async logOrderEvent(context: RunContext, event: string): Promise<string> {
    console.log(`Order event: ${event}`);
    return "Logged";
  }

  @functionTool({
    description: "Calculate the order total. Wait for the result before responding.",
    flags: ToolFlag.REQUIRE_RESULT,
  })
  async calculateTotal(context: RunContext, itemIds: string[]): Promise<string> {
    const total = itemIds.length * 15.99;
    return `Order total: $${total.toFixed(2)} (before tax)`;
  }
}

The key flags to know:

1. RUN_IMMEDIATELY

The tool executes without blocking the LLM's response generation. Use this for fire-and-forget operations like analytics logging or cache warming where the result does not affect the conversation.

2. REQUIRE_RESULT

The LLM must wait for the tool result before generating any response. This is the default behavior for most tools, but you can set it explicitly when the result is critical and the LLM must not start speaking prematurely.

Match flags to user expectations

If the user is waiting for a price calculation, use REQUIRE_RESULT so the agent does not say "Let me check that" and then go silent. If the tool is just logging something internally, use RUN_IMMEDIATELY so the conversation keeps flowing without an awkward pause.
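
In plain asyncio terms, the two flags correspond to scheduling a task versus awaiting it. The sketch below is not the framework's scheduler; the handler names and sleeps are stand-ins that show the difference in control flow:

```python
import asyncio


async def log_event(event: str) -> str:
    await asyncio.sleep(0.01)  # simulated I/O
    return "Logged"


async def calculate_total(item_ids: list[str]) -> str:
    await asyncio.sleep(0.01)  # simulated I/O
    return f"Order total: ${len(item_ids) * 15.99:.2f} (before tax)"


async def respond() -> list[str]:
    # RUN_IMMEDIATELY: fire-and-forget. The task is scheduled and
    # response generation proceeds without waiting for its result.
    background = asyncio.create_task(log_event("order_started"))

    # REQUIRE_RESULT: the reply depends on this value, so we block on it.
    total = await calculate_total(["a1", "b2"])
    reply = [f"Your {total.lower()}"]

    await background  # tidy up the background task before returning
    return reply
```

The final `await background` matters: a fire-and-forget task still needs an owner, or exceptions inside it are silently dropped.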

What you learned

  • Tool.create() builds tools programmatically when decorators are not flexible enough
  • raw_schema lets you define complex nested parameter types with full JSON Schema
  • Toolset groups related tools for reuse across agents and bulk management
  • ToolFlag controls whether the LLM waits for a tool result or keeps talking

Test your knowledge

When should you use raw_schema instead of the standard parameters option in Tool.create()?

Next up

Tools return simple strings. But what if you need the LLM to collect structured data — a complete order item with name, quantity, and modifications — through conversation? That is what AgentTask is for, and it is the subject of the next chapter.

Concepts covered
Programmatic tools · raw_schema · Toolset · ToolFlag