# LLMSelect 1.0.0

## Overview

v1.0.0 Native

## Description
Allows an LLM to select an option from a list of options and execute the corresponding action.
## Configuration Options
| Name | Data Type | Description | Default Value |
|---|---|---|---|
| use_thread_history | bool | Whether to include previous conversation history when making the selection | false |
## Inputs
| Name | Data Type | Description |
|---|---|---|
| message | list[ContentItem] or str | The user message, as a string or a list of ContentItems, that the LLM analyzes to select an option |
## Outputs
| Name | Data Type | Description |
|---|---|---|
| selected_option | str | The name of the option selected by the LLM, which triggers execution of the corresponding action |
## Examples
```yaml
# Basic customer support routing
steps:
  - name: route_inquiry
    type: LLMSelect
    config:
      llm_config:
        model: "gpt-4o"
        api_key: "sk-proj-abc123..."
        temperature: 0.3
        max_tokens: 50
      pre_prompt: "You are a customer support routing assistant. Analyze the customer inquiry and select the most appropriate department."
      use_thread_history: false
      options:
        technical_support:
          description: "Handle technical issues, bugs, and troubleshooting requests"
          action: "route_to_technical"
        billing_support:
          description: "Handle billing questions, payment issues, and account problems"
          action: "route_to_billing"
        general_inquiry:
          description: "Handle general questions and information requests"
          action: "route_to_general"
    inputs:
      message: "My payment failed and I can't access my account features"
```
## Error Handling

### Option Selection Error

- **Error:** `LLM response did not match any of the options`
- **Cause:** The LLM's response does not exactly match any of the defined option names.
- **Solution:** Use clear, descriptive option names, and prefer shorter, simpler names. Adjust the pre_prompt to instruct the LLM to return the exact option name and nothing else.
### Configuration Error

- **Error:** `LLM settings is unexpectedly None`
- **Cause:** The LLM configuration is missing required settings, or model initialization failed.
- **Solution:** Verify that your llm_config contains a valid model name, API key, and any other required parameters, and that the specified model is available to your API key.
### Token Limit Error

- **Error:** `Request exceeds maximum token limit`
- **Cause:** The input message combined with the option descriptions exceeds the model's token limit.
- **Solution:** Shorten the option descriptions or the pre_prompt, use a model with a higher token limit, or break complex selections into multiple steps.
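A rough pre-flight check can catch oversized requests before they reach the model. The sketch below uses the common ~4-characters-per-token heuristic for English text; `selection_prompt_budget` is a hypothetical helper, and a real tokenizer should be used for exact counts:

```python
def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def selection_prompt_budget(pre_prompt: str, message: str,
                            option_descriptions: list[str],
                            limit: int = 4096) -> tuple[int, bool]:
    """Return (estimated_tokens, fits_within_limit) for a selection request."""
    total = estimate_tokens(pre_prompt) + estimate_tokens(message)
    total += sum(estimate_tokens(d) for d in option_descriptions)
    return total, total <= limit
```

Running the check against the routing example above should report a comfortable fit; a `False` result signals that descriptions or the pre_prompt need trimming.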
## FAQ

### How do I design effective options for LLM selection?
Create clear, distinct options that are easy for the LLM to differentiate:
- Use descriptive names: "technical_support" vs "tech" for clarity
- Provide detailed descriptions: Help the LLM understand when to select each option
- Avoid overlapping categories: Ensure options are mutually exclusive
- Keep it concise: Too many options can confuse the selection process
- Test with examples: Verify the LLM consistently selects correct options
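These guidelines can be checked mechanically before deployment. The sketch below is a hypothetical lint pass over an option set (name to description); the function name and thresholds are illustrative, not part of LLMSelect:

```python
import re

def lint_options(options: dict[str, str]) -> list[str]:
    """Return warnings for option sets likely to confuse the LLM.

    `options` maps option name -> description.
    """
    warnings = []
    for name, desc in options.items():
        # Short snake_case names are easiest for the model to echo back.
        if not re.fullmatch(r"[a-z][a-z0-9_]*", name):
            warnings.append(f"{name}: prefer short snake_case names")
        # One-word descriptions give the model too little to go on.
        if len(desc.split()) < 3:
            warnings.append(f"{name}: description too short to disambiguate")
    if len(options) > 10:
        warnings.append("more than 10 options may reduce selection accuracy")
    return warnings
```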
### Which LLM models work best for option selection?
Model selection depends on your specific requirements:
- GPT-4o: Best for complex decision-making with multiple criteria
- Claude-3.5-Sonnet: Excellent for nuanced categorization and context understanding
- GPT-3.5-Turbo: Cost-effective for simple binary or straightforward selections
For most routing tasks, GPT-3.5-Turbo provides sufficient accuracy at lower cost.
### How should I handle selection errors and improve accuracy?
Implement strategies to improve selection reliability:
- Validate option names: Use simple, unambiguous naming conventions
- Add examples in pre_prompt: Show the LLM expected selection patterns
- Lower temperature: Use 0.1-0.3 for more consistent selections
- Fallback handling: Define default actions when no option matches
- Logging and monitoring: Track failed selections to identify patterns
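Fallback handling and monitoring can be combined in a small wrapper around the selection result. The sketch below is illustrative; `select_with_fallback` and the default option name are assumptions, not part of the LLMSelect API:

```python
import logging

logger = logging.getLogger("llmselect.routing")

def select_with_fallback(llm_reply: str, option_names: set[str],
                         default: str = "general_inquiry") -> str:
    """Return a matched option name, or a default route on a miss.

    Misses are logged so failure patterns can be reviewed later.
    """
    cleaned = llm_reply.strip().lower()
    if cleaned in option_names:
        return cleaned
    logger.warning("unmatched selection %r, falling back to %r",
                   llm_reply, default)
    return default
```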
### When should I enable thread history, and when should I disable it?
Thread history usage depends on your workflow context:
- Enable for: Multi-turn conversations, progressive decision trees, context-aware routing
- Disable for: Independent classifications, stateless routing, high-volume processing
- Consider cost: History increases token usage and processing time
- Memory limits: Long conversations may exceed token limits
For most routing scenarios, disabling history improves performance and reduces costs.
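For example, a multi-turn decision tree would flip the flag shown in the routing example above:

```yaml
config:
  # Enable only when earlier turns genuinely inform the selection;
  # history increases token usage and latency per request.
  use_thread_history: true
```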
### How can I integrate LLMSelect with other workflow components?
LLMSelect works as a powerful routing mechanism in complex workflows:
- Conditional branching: Route to different processing pipelines based on content type
- Dynamic workflows: Combine with other LLM blocks for adaptive processing
- Human-in-the-loop: Route complex cases to manual review while automating simple ones
- Escalation patterns: Automatically escalate based on urgency or complexity analysis
- Load balancing: Distribute work across different processing resources
The selected option can trigger subsequent workflow steps or external system integrations.
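As a sketch of the downstream side, the `selected_option` output can drive a simple dispatch table. The handler functions below are hypothetical stand-ins for real workflow steps or external integrations:

```python
def route_to_technical(message: str) -> str:
    return f"technical: {message}"

def route_to_billing(message: str) -> str:
    return f"billing: {message}"

def route_to_general(message: str) -> str:
    return f"general: {message}"

# Map LLMSelect's selected_option output to downstream handlers,
# with a safe default for unmatched selections.
HANDLERS = {
    "technical_support": route_to_technical,
    "billing_support": route_to_billing,
    "general_inquiry": route_to_general,
}

def dispatch(selected_option: str, message: str) -> str:
    return HANDLERS.get(selected_option, route_to_general)(message)
```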