
LLMWithTools 1.0.0

Overview


Description

Processes chat messages using an LLM with function-calling capabilities. The block exposes two steps: chat, which processes incoming messages, and handle_tool_result, which manages the results of tool executions.

⚠️ Deprecation Notice: This block is deprecated and will be removed in a future version. Use the LLM block instead for new implementations.

Configuration Options

| Name | Data Type | Description | Default Value |
| --- | --- | --- | --- |
| use_thread_history | bool | Whether to include previous conversation history in the context | false |

Inputs

| Name | Data Type | Description |
| --- | --- | --- |
| message | list[ContentItem] or ContentItem or str | Input message: a string, a single content item, or a list of content items |

Outputs

| Name | Data Type | Description |
| --- | --- | --- |
| response | ResponseSchemaT | Output of the LLM: a string if the response schema is a string, or a dictionary if the response schema is an object |
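For example, a dictionary response can be requested by declaring an object response schema. The field names below are illustrative, not part of the block's API:

```yaml
# Hypothetical object schema: with this configuration the block's
# response output would be a dictionary with these keys rather than
# a plain string.
response_schema:
  type: "object"
  properties:
    answer:
      type: "string"
    confidence:
      type: "number"
```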

Examples

# DEPRECATED: Use LLM block instead
# Basic assistant with function calling capabilities
steps:
  - name: assistant_with_tools
    type: LLMWithTools
    config:
      llm_config:
        model: "gpt-4o"
        api_key: "sk-proj-abc123..."
        temperature: 0.3
        max_tokens: 500
        pre_prompt: "You are a helpful assistant with access to tools for calculations and data retrieval"
      use_thread_history: false
      response_schema:
        type: "string"
      tools:
        calculator:
          description: "Perform mathematical calculations"
          parameters:
            type: "object"
            properties:
              expression:
                type: "string"
                description: "Mathematical expression to evaluate"
        weather_lookup:
          description: "Get current weather information"
          parameters:
            type: "object"
            properties:
              location:
                type: "string"
                description: "City name or coordinates"
    inputs:
      message: "What's 25 * 34 and what's the weather like in San Francisco?"

Error Handling

Tool Call Argument Error

Error: LLM tried to call a tool but returned empty arguments

Cause: The LLM attempted to call a function but didn't provide the required parameters

Solution: Ensure tool descriptions clearly specify required parameters. Consider simplifying complex tool schemas or providing examples in the pre_prompt
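Assuming the tool schemas follow JSON Schema conventions, as in the example above, one way to make required parameters explicit is a required list plus a concrete example in each parameter description:

```yaml
# Sketch of a tool schema that tells the LLM exactly which arguments
# it must supply. The 'required' list marks 'expression' as mandatory.
calculator:
  description: "Perform mathematical calculations, e.g. '25 * 34'"
  parameters:
    type: "object"
    properties:
      expression:
        type: "string"
        description: "Mathematical expression to evaluate, e.g. '2 + 2'"
    required: ["expression"]
```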

Tool Execution Timeout

Error: Tool execution timed out or failed

Cause: A tool took too long to execute or encountered an internal error

Solution: Implement proper timeout handling in your tools, add error recovery mechanisms, and ensure tool functions are optimized for performance

Response Schema Mismatch

Error: Tool response doesn't match expected schema format

Cause: The tool's output format conflicts with the defined response_schema

Solution: Align tool output formats with your response schema, or use flexible schema definitions that accommodate various tool response types
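One flexible-schema sketch, again assuming JSON Schema conventions: require only a single field and leave room for extra, tool-specific data so varied tool outputs do not violate the schema. The field names are illustrative:

```yaml
# Only 'result' is mandatory; tools that attach additional structured
# data can place it under 'details' without breaking validation.
response_schema:
  type: "object"
  properties:
    result:
      type: "string"
    details:
      type: "object"
  required: ["result"]
```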

FAQ

Why is this block deprecated and what should I use instead?

LLMWithTools is being phased out in favor of the newer, more flexible LLM block:

  • Improved architecture: The LLM block offers better tool management and execution
  • Enhanced performance: More efficient processing and reduced complexity
  • Better error handling: More robust error management and recovery mechanisms
  • Future compatibility: Ongoing updates and feature additions focus on the LLM block

Migrate existing workflows to use the LLM block for continued support and new features.

How do I design effective tools for LLM function calling?

Create clear, focused tools that the LLM can easily understand and use:

  • Single responsibility: Each tool should have one clear purpose
  • Clear descriptions: Explain exactly what the tool does and when to use it
  • Simple parameters: Use straightforward parameter names and types
  • Error handling: Include proper validation and error responses in your tools
  • Performance: Keep tool execution fast to avoid timeouts
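Applying these guidelines, a focused single-purpose tool definition might look like the following. The tool name and fields are illustrative, not built-in:

```yaml
# One clear purpose, an explicit "when to use it" hint, simple typed
# parameters with examples, and a 'required' list for validation.
currency_convert:
  description: "Convert an amount between currencies. Use only when the user asks for a currency conversion."
  parameters:
    type: "object"
    properties:
      amount:
        type: "number"
        description: "Amount to convert, e.g. 100"
      from_currency:
        type: "string"
        description: "ISO 4217 code of the source currency, e.g. 'USD'"
      to_currency:
        type: "string"
        description: "ISO 4217 code of the target currency, e.g. 'EUR'"
    required: ["amount", "from_currency", "to_currency"]
```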

Which LLM models work best with function calling?

Model performance varies for tool calling capabilities:

  • GPT-4o: Excellent function calling with complex tool schemas and multi-step operations
  • Claude-3.5-Sonnet: Strong performance with business tools and data analysis functions
  • GPT-3.5-Turbo: Reliable for simple tools but may struggle with complex parameter schemas

For mission-critical applications requiring precise tool usage, prefer GPT-4o or Claude-3.5-Sonnet.

How do I handle complex multi-step tool interactions?

Design your workflow to support complex tool orchestration:

  • Enable thread history: Let the LLM maintain context across tool calls
  • Tool chaining: Design tools that can work together sequentially
  • State management: Use tool results to inform subsequent tool calls
  • Conditional logic: Implement tools that can branch based on previous results
  • Error recovery: Include fallback tools for when primary tools fail
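The first point maps directly onto the block's documented use_thread_history option:

```yaml
# Keep prior conversation turns in context so the LLM can reason
# across a sequence of tool calls.
config:
  use_thread_history: true
```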

How can I migrate from LLMWithTools to the newer LLM block?

Follow these steps to migrate your existing implementations:

  • Review configuration: The LLM block has similar but updated configuration options
  • Update tool definitions: Tool schemas may need slight adjustments for the new format
  • Test thoroughly: Validate that all tool interactions work as expected
  • Update error handling: Adapt error handling for the new block's behavior
  • Performance tuning: Re-optimize temperature and token settings for the new block

The migration process typically involves minimal changes to your existing tool definitions.
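As a rough sketch only: assuming the LLM block accepts a configuration shaped like LLMWithTools (the actual option names may differ, so consult the LLM block's own documentation), the migration could be as small as changing the step type:

```yaml
# Hypothetical migration: same config structure, new block type.
steps:
  - name: assistant_with_tools
    type: LLM          # was: LLMWithTools
    config:
      llm_config:
        model: "gpt-4o"
        temperature: 0.3
      tools:
        calculator:
          description: "Perform mathematical calculations"
          parameters:
            type: "object"
            properties:
              expression:
                type: "string"
```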