
Analyzer 1.0.0

Overview

Type: Agent · Level: Beginner


Description

Analyzes the user's message against retrieved documents and returns an LLM-generated response with source citations.

Note: this block is deprecated and will be removed in a future version; the LLM block is the recommended replacement (see the FAQ below).

Configuration Options

No configuration options available.

Inputs

| Name | Data Type | Description |
| --- | --- | --- |
| documents | list[DataSetItem] | List of relevant documents retrieved from the knowledge base that will be used as context for the LLM |
| message | list[ContentItem] or str | User message, either as a string or a list of ContentItems, that will be processed by the LLM |

Outputs

| Name | Data Type | Description |
| --- | --- | --- |
| sources | list[Source] | List of sources referenced in the LLM response, with citation indexes and metadata linking back to the original documents |
| content | ResponseSchemaT | LLM-generated response content, either free-form text or structured data, depending on the response_schema configuration |

Examples


# Basic text analysis with free-form response
- name: analyze_customer_feedback
  type: Analyzer
  config:
    llm_config:
      model: "gpt-4"
      temperature: 0.3
      pre_prompt: "Analyze the customer feedback and provide insights on sentiment and key themes"
    response_schema:
      type: "string"
  inputs:
    documents: "{{ knowledge_base.outputs.documents }}"
    message: "What are the main pain points mentioned in recent customer feedback?"
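
A second, hypothetical example showing structured output. The object-style response_schema below assumes a JSON-Schema-like syntax, and the node names (classify_support_tickets, user_input) are placeholders; check the schema reference for your version:

```yaml
# Structured extraction with inline source citations
- name: classify_support_tickets
  type: Analyzer
  config:
    llm_config:
      model: "gpt-4"
      temperature: 0.1
      pre_prompt: "Classify the ticket and summarize the underlying issue"
    response_schema:        # assumed JSON-Schema-style object syntax
      type: "object"
      properties:
        category:
          type: "string"
        summary:
          type: "string"
  inputs:
    documents: "{{ knowledge_base.outputs.documents }}"
    message: "{{ user_input.outputs.message }}"
```

With an object schema, citations such as "(source_1)" appear inline in the generated fields and are converted to numbered references in the final output.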
    

Error Handling

LLMError

Error Code: LLMError: Agent response is null
Common Cause: The LLM returns null or empty content, often due to model overload or configuration issues
Solution: Verify the LLM configuration, check model availability, and ensure adequate token limits

BlockError

Error Code: BlockError: LLM tried to call a tool but returned empty arguments
Common Cause: The model attempts function calling but provides malformed or empty tool arguments
Solution: Adjust response_schema complexity, increase temperature slightly, or switch to a more capable model

JSON Validation Error

Error Code: ValidationError: JSON parsing failed
Common Cause: The LLM generates a response that doesn't match the specified response_schema format
Solution: Simplify the response_schema structure, provide clearer pre_prompt instructions, or use the string type for complex outputs

FAQ

Why is this block marked as deprecated?

The Analyzer block will be removed in a future version. Use the more flexible LLM block instead, which provides the same functionality with better performance and more features.

How do source citations work?

The block automatically appends document context to your message and instructs the LLM to cite sources. For string responses, sources are listed separately. For structured responses, citations appear inline as "(source_1)" and are converted to numbered references in the final output.

What's the difference between string and structured response schemas?

String schemas (type: "string") return free-form text with separate source listings. Structured schemas return JSON objects with inline citations. Use string for narrative responses and structured for data extraction or classification tasks.
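
As a sketch, the two styles differ only in the response_schema block. The string form matches the example above; the object form assumes a JSON-Schema-like syntax, and the field names are illustrative:

```yaml
# Free-form narrative response
response_schema:
  type: "string"
---
# Structured extraction (assumed JSON-Schema-style syntax)
response_schema:
  type: "object"
  properties:
    sentiment:
      type: "string"
    key_themes:
      type: "array"
      items:
        type: "string"
```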

How can I improve response quality?

Use specific pre_prompts, provide relevant document context, set appropriate temperature (0.1-0.3 for factual analysis, 0.5-0.8 for creative tasks), and ensure your response_schema matches the complexity of the expected output.

What happens if no documents are provided?

The block can still process the message, but will have no document context for citations. The LLM will respond based solely on the message content and any pre_prompt instructions, with an empty sources array in the output.
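
A minimal, message-only configuration can be sketched as follows, assuming the documents input may simply be omitted (the node name is a placeholder); the sources output will then be an empty list:

```yaml
# Message-only usage: no document context, so sources will be empty
- name: quick_question
  type: Analyzer
  config:
    llm_config:
      model: "gpt-4"
      temperature: 0.3
      pre_prompt: "Answer concisely"
    response_schema:
      type: "string"
  inputs:
    message: "What are common pitfalls when writing pre_prompts?"
```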