LLM

Description

The LLM step connects to a Large Language Model to generate responses for user queries. You define a system prompt that dictates the model's behavior, tone, and output format. This configuration ensures the model produces structured output that integrates correctly into your workflow.

Example using the LLM step: The workflow detects urgent customer emails and uses the LLM step to convert informal complaints into structured internal alerts for quick team action. The process performs the following actions:

  1. The workflow detects an inbound customer email with an urgent tone.
  2. The LLM step converts the informal complaint into a professionally formatted, structured summary.
  3. The workflow generates an internal alert containing the summary for the support team.
  4. The support team can quickly understand and act on the issue.
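The flow above can be sketched in a few lines of Python. This is a minimal illustration, not the platform's implementation: `call_llm` is a hypothetical stand-in for the LLM step, and the returned fields (`summary`, `priority`, `suggested_action`) are assumed output keys chosen for the example.

```python
# Hypothetical sketch of the urgent-email workflow: a system prompt fixes the
# model's role and output format, and the informal complaint is passed as the
# user prompt. `call_llm` is a stub, not a real API.

def call_llm(system_prompt: str, user_prompt: str) -> dict:
    """Stub that mimics the structured output an LLM step might return."""
    return {
        "summary": "Customer reports repeated checkout failures; tone is urgent.",
        "priority": "high",
        "suggested_action": "Escalate to the payments on-call engineer.",
    }

def email_to_alert(email_body: str) -> dict:
    # The system prompt dictates behavior, tone, and output format,
    # so the reply integrates cleanly into the rest of the workflow.
    system_prompt = (
        "You are a support triage assistant. Convert informal complaints "
        "into a JSON object with keys: summary, priority, suggested_action."
    )
    return call_llm(system_prompt, email_body)

alert = email_to_alert("this is SO broken, checkout fails every time!!!")
```

The structured `alert` dictionary is what a downstream step (such as an internal notification) would consume.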

Configurations

  1. Step name: Specify a unique name for the step. The name helps you identify the step in the workflow and makes it easier to debug or link it with other steps.

LLM Setting tab

For details about the LLM fields, see LLM setting.

Input tab

Prompt configuration helps the AI understand context and intent, producing accurate and meaningful responses. This section allows you to define the agent's persona, operational boundaries, and the specific task it must execute. The system prompt defines the agent's role and behavior, while the user prompt provides specific instructions to guide its responses.

Note: Both the System Prompt and User Prompt fields support variables and fields from previous steps. For a deeper understanding of system and user prompts, see Example to understand execution flow of the AI Agent step.
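The division of labor between the two prompts can be pictured with a chat-style message list. This assumes a chat-completions-style format, which is a common convention for LLM APIs, not necessarily what this platform uses internally; `build_messages` is a hypothetical helper.

```python
# Sketch (assuming a chat-completions-style message format): the system
# prompt sets the persona and boundaries, the user prompt carries the task.

def build_messages(system_prompt: str, user_prompt: str) -> list:
    return [
        {"role": "system", "content": system_prompt},  # constant role/rules
        {"role": "user", "content": user_prompt},      # per-run instruction
    ]

messages = build_messages(
    "You are a Level 2 Support Agent responsible for analyzing firewall "
    "logs. Do not make assumptions.",
    "Analyze the following log entry and determine if it is a security breach.",
)
```

In this shape, the system message stays fixed across executions while the user message changes with each run.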

  1. System Prompt: Specify the system prompt.
     The system prompt defines the immutable role, behavior, and rules for the AI agent. It acts as the constitution for the model, determining how the model interprets inputs and formats its outputs.
     - Function: Establishes the baseline logic (for example, "You are a Level 2 Support Agent responsible for analyzing firewall logs. Do not make assumptions.").
     - Priority: The system prompt remains constant throughout the execution and takes precedence over conflicting instructions in the user prompt, acting as a safety guardrail.
  2. User Prompt: Specify the query or task for the agent to process.
     The user prompt provides the immediate context and goal for the current execution cycle.
     - Function: Defines the "what" of the operation (for example, "Analyze the following log entry: {Log_Variable} and determine if it is a security breach.").
     - Dynamic Usage: In automated workflows, this field typically contains variables (inputs) that change with every execution.
     The field accepts variable or static values and fields from previous steps.
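Dynamic usage of the user prompt can be sketched as simple placeholder substitution. The `{Log_Variable}` placeholder mirrors the example above; the `render_prompt` helper is illustrative, not the platform's actual variable-resolution mechanism.

```python
# Minimal sketch of resolving a variable placeholder (such as {Log_Variable})
# in the user prompt at runtime, before the prompt is sent to the model.

def render_prompt(template: str, variables: dict) -> str:
    prompt = template
    for name, value in variables.items():
        prompt = prompt.replace("{" + name + "}", str(value))
    return prompt

template = (
    "Analyze the following log entry: {Log_Variable} "
    "and determine if it is a security breach."
)
# The system prompt stays constant across executions; only the rendered
# user prompt changes with each run.
prompt = render_prompt(template, {"Log_Variable": "FAILED LOGIN from 10.0.0.7"})
```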

File attachment(s) tab

Use this tab to provide the AI agent with contextual data (grounding). By linking documents or references, you enable the agent to answer questions using specific enterprise data rather than relying solely on its general training.
  1. Use File as Attachment: Determine how files are processed by the Large Language Model (LLM).
     - Cleared (default): The system extracts and indexes text from the files. Only the snippets relevant to the user's prompt are sent to the LLM. This is best for large documents where you need specific answers.
     - Selected: The system sends the entire file to the LLM as an attachment object. This is required for tasks involving image analysis (OCR) or when the LLM needs the full document context at once.
     Supported file formats by LLM provider:
     - OpenAI: jpg, jpeg, png, pdf
     - Azure OpenAI: jpg, jpeg, png
     - Google Gemini: jpg, jpeg, png
     - Google Vertex AI: pdf, jpg, jpeg, png, txt, json, csv, html, css, py
     - AWS Bedrock: jpg, jpeg, png, digital pdf
  2. File or Folder: Specify the documents the step must analyze during execution by adding one or more files and/or folders to the File Path list.
     The step supports multiple file and folder paths, enabling it to analyze content from different sources in a single execution. During runtime, the step extracts only the information relevant to the provided prompt from the selected files or folders.
     To add inputs:
     - Select an individual file.
     - Select an entire folder.
     Once selected, the corresponding paths are displayed in the File Path table for reference.
     Supported file extensions:
     You can add files with the following extensions in the File Path:
     - digital pdf
     - html
     - xml
     - csv
     - json
     - txt
     - doc
     - docx
     The field accepts variable or static values.
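The two file-related settings above amount to validation rules, which can be sketched as simple lookup helpers. The format and extension tables are copied from this page; the helper functions themselves (`can_attach`, `split_paths`) are illustrative assumptions, not platform APIs.

```python
# Illustrative checks based on the tables above: whether a file may be sent
# as a full attachment to a given provider, and whether a File Path entry
# uses a supported extension.

SUPPORTED_ATTACHMENT_FORMATS = {
    "OpenAI": {"jpg", "jpeg", "png", "pdf"},
    "Azure OpenAI": {"jpg", "jpeg", "png"},
    "Google Gemini": {"jpg", "jpeg", "png"},
    "Google Vertex AI": {"pdf", "jpg", "jpeg", "png", "txt", "json",
                         "csv", "html", "css", "py"},
    "AWS Bedrock": {"jpg", "jpeg", "png", "pdf"},  # PDF must be digital
}

SUPPORTED_EXTENSIONS = {"pdf", "html", "xml", "csv", "json",
                        "txt", "doc", "docx"}  # PDF must be digital

def can_attach(provider: str, filename: str) -> bool:
    """Is this file a valid full attachment for the given provider?"""
    ext = filename.rsplit(".", 1)[-1].lower()
    return ext in SUPPORTED_ATTACHMENT_FORMATS.get(provider, set())

def split_paths(paths):
    """Partition File Path entries into supported and rejected lists."""
    ok, rejected = [], []
    for path in paths:
        ext = path.rsplit(".", 1)[-1].lower()
        (ok if ext in SUPPORTED_EXTENSIONS else rejected).append(path)
    return ok, rejected

ok, rejected = split_paths(["report.pdf", "notes.txt", "photo.bmp"])
```

For example, `can_attach("Azure OpenAI", "scan.pdf")` is false because Azure OpenAI accepts only image attachments in this table.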

Output tab

Use the Output tab to save the results of the AI Agent's execution into variables. You can reference these variables in later steps of your workflow to use the AI's answer or track usage costs.

  1. Response: Specify a name for the variable that stores the AI's final answer. This variable captures the main text reply or the result of a tool execution.
     For example, if you name this variable AnalysisResult, you can reference it in a later email step to send the answer to a user.
     Default value: response
  2. Total tokens: Specify a field name to track the total number of tokens used during the execution. This value is the sum of the input and output tokens, representing the total processing size of the transaction.
     Default value: totalTokens
  3. Input tokens: Specify a field name to count the tokens sent to the AI model. This count includes your prompts, system instructions, and any attached files or context used to ask the question.
     Default value: inputTokens
  4. Output tokens: Specify a field name to count the tokens generated by the AI in its reply. This measures the length of the response or answer provided by the model.
     Default value: outputTokens
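The relationship between the output variables can be sketched as a small record builder: total tokens is always the sum of input and output tokens. The key names match the documented defaults; the `build_output` helper and its record shape are assumptions for illustration.

```python
# Sketch of the step's output variables using the documented default names.
# totalTokens = inputTokens + outputTokens.

def build_output(response_text: str, input_tokens: int, output_tokens: int) -> dict:
    return {
        "response": response_text,          # the AI's final answer
        "inputTokens": input_tokens,        # prompts, instructions, attachments
        "outputTokens": output_tokens,      # length of the model's reply
        "totalTokens": input_tokens + output_tokens,
    }

out = build_output("Issue escalated.", input_tokens=420, output_tokens=35)
```

A later step could reference `out["response"]` to reuse the answer, or `out["totalTokens"]` to track usage costs.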