
AI Agent

Description

The AI Agent step leverages Large Language Models (LLMs) with reasoning and function-calling capabilities to orchestrate complex automation tasks. Unlike static, linear scripts, this step dynamically interprets system and user prompts to determine the optimal execution path.

Upon analysis, the agent autonomously selects and triggers the necessary actions using two primary mechanisms:

  • AutomationEdge (AE) Workflows: For executing established enterprise automation processes.
  • Model Context Protocol (MCP) Tools: For standardized server-side tool interaction.

This architecture enables adaptive, goal-oriented automation where the model resolves logical dependencies and executes tools based on real-time context rather than pre-defined rigid rules.
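The decision loop described above can be sketched in a few lines. This is purely illustrative: the real step uses LLM reasoning and function calling rather than keyword matching, and the tool names, descriptions, and matching logic below are hypothetical.

```python
# Minimal sketch of the agent's decision loop. The real implementation
# relies on LLM function calling; keyword overlap stands in for it here.
def pick_tool(user_prompt, tools):
    """Return the tool whose description best matches the prompt, or None."""
    words = set(user_prompt.lower().split())
    best, best_score = None, 0
    for tool in tools:
        score = len(words & set(tool["description"].lower().split()))
        if score > best_score:
            best, best_score = tool, score
    return best

def run_agent(user_prompt, tools):
    tool = pick_tool(user_prompt, tools)
    if tool is None:
        # No tool matches the intent: answer from the model alone.
        return {"source": "llm", "answer": f"LLM answer for: {user_prompt}"}
    return {"source": tool["name"], "answer": tool["run"](user_prompt)}

# Hypothetical tool registry (in practice: AE workflows or MCP tools).
tools = [
    {"name": "reset_password",
     "description": "reset a user password in active directory",
     "run": lambda p: "password reset ticket created"},
]
print(run_agent("please reset the password for user jdoe", tools)["source"])
# → reset_password
```

The key property shown is that routing is driven by the tool descriptions and the live request, not by a fixed execution order.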

Configuration

1. Step name: Specify a unique name for the step.
   The name helps you identify the step in the workflow and makes it easier to debug or link it with other steps.

LLM Setting tab

For details of the LLM fields, see LLM setting.

AE Configuration tab

Use this tab to configure the connection to the AutomationEdge (AE) server. It establishes the identity and permissions the agent uses to execute workflows and manage approval tasks.

1. AutomationEdge base URL: Specify the primary URL used to access the AutomationEdge UI (for example, https://ae-server.company.com:8443). The agent uses this endpoint for all API communication.
   The field accepts variable or static values.
2. Username: Specify the login name used to log on to the AE UI.
   The field accepts variable or static values.
3. Password: Specify the password used to log on to the AE UI.
   The field accepts variable or static values.
4. Organization code: Specify the Tenant Organization Code associated with your AutomationEdge instance.
   The field accepts variable or static values.
5. User first & last name: Specify the user's first name and last name.
6. Approval task template: Specify the name of the template defined in the AutomationEdge UI to generate approval requests.
   Note: The Task Template must already exist on the AE UI server before you run the workflow on the server.
7. Approval task title: Define a custom title for the approval tasks generated by this agent (for example, AI Agent Execution Request - [Context]). Clear titles help approvers understand the context immediately.
8. Approver (Groups): Specify valid AutomationEdge user group names to assign the approval task to those groups.
   Multi-value: Separate multiple groups with commas (for example, IT_Admins, Finance_Approvers).
9. Approver (Users): Specify valid AutomationEdge usernames to assign the approval task.
   Multi-value: Separate multiple usernames with commas.
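The multi-value approver fields follow a simple comma-separated format. A small helper like the one below (hypothetical; it merely mirrors the documented format) shows how such a field resolves into individual names:

```python
# Normalize a comma-separated approver field into a list of names,
# tolerating stray whitespace and empty entries.
def parse_approvers(field):
    return [name.strip() for name in field.split(",") if name.strip()]

print(parse_approvers("IT_Admins, Finance_Approvers"))
# → ['IT_Admins', 'Finance_Approvers']
```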

Prompt Configuration tab

Prompt configuration helps the AI understand context and intent, producing accurate and meaningful responses. This section allows you to define the agent's persona, operational boundaries, and the specific task it must execute. The system prompt defines the role and behavior, while the user prompt provides specific instructions to guide the agent's responses. For a deeper understanding of the System and User prompts, see Example to understand execution flow of the AI Agent step.

Note: System Prompt and User Prompt support variable fields populated from previous steps.

1. System Prompt: Specify the system prompt.
   The system prompt defines the role, behavior, and rules for the AI agent. It acts as the constitution for the model, determining how it interprets inputs and formats its outputs.
   Function: Establishes the baseline logic (for example, "You are a Level 2 Support Agent responsible for analyzing firewall logs. Do not make assumptions.").
   Priority: The system prompt has higher priority than the user prompt; its context remains constant throughout the execution and takes precedence over conflicting instructions in the user prompt, acting as a safety guardrail.
2. User Prompt: Specify the query or task for the agent to process. The user prompt provides the immediate context and goal for the current execution cycle.
   Function: Defines the "what" of the operation (for example, "Analyze the following log entry: {Log_Variable} and determine if it is a security breach.").
   Dynamic Usage: In automated workflows, this field typically contains variables (inputs) that change with every execution.
   For a detailed breakdown of how these prompts interact during runtime, see Example to understand execution flow of the AI Agent step.
   The field accepts variable or static values and fields from previous steps.
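The runtime substitution of a workflow variable into the User Prompt works like ordinary template filling. The sketch below is illustrative only; the variable syntax and log entry are made up, not AutomationEdge's actual placeholder format:

```python
# A fixed System Prompt and a User Prompt template with one variable.
system_prompt = ("You are a Level 2 Support Agent responsible for "
                 "analyzing firewall logs. Do not make assumptions.")
user_template = ("Analyze the following log entry: {log_entry} "
                 "and determine if it is a security breach.")

# At runtime, the variable is populated from a previous step.
log_entry = "DENY tcp 10.0.0.5:443 -> 192.168.1.20:22"
user_prompt = user_template.format(log_entry=log_entry)
print(user_prompt)
```

The system prompt stays constant across executions, while the user prompt changes with each new input, which is exactly the split the two fields encode.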

Memory Settings tab

Use the Memory Settings tab to enable contextual continuity. By default, the step is stateless, treating every execution as a new, isolated event. Enabling memory allows the agent to recall previous interactions, enabling multi-turn conversations and complex problem-solving across multiple iterations.

Notes:

  • Stateless (Default): The step processes the current prompt in isolation without knowledge of past inputs.
  • Stateful (Memory Enabled): The step retrieves the history associated with the specific Conversation ID.
  • Retention Limit: The system stores the most recent 100 messages per Conversation ID.
1. Conversation Memory: Select the Conversation Memory checkbox to enable the use of previous chat history. When enabled, the step includes previous message history in the context window, allowing it to reference past queries and answers.
   Note: Enabling this option makes the Conversation ID field mandatory.
2. Conversation ID: Specify a unique identifier to tag the conversation thread. The step uses this ID to retrieve and update the specific chat history for the current session.
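Conceptually, the memory behaves like a per-ID buffer capped at the documented 100-message retention limit. The data structure below is a sketch of that behavior (the real store is server-side; only the keying by Conversation ID and the cap come from this documentation):

```python
from collections import defaultdict, deque

RETENTION_LIMIT = 100  # documented cap: most recent 100 messages per ID

# Per-conversation history keyed by Conversation ID; a bounded deque
# silently drops the oldest message once the limit is exceeded.
histories = defaultdict(lambda: deque(maxlen=RETENTION_LIMIT))

def add_message(conversation_id, role, text):
    histories[conversation_id].append({"role": role, "text": text})

def context_for(conversation_id):
    """History the step would include in the context window."""
    return list(histories[conversation_id])

for i in range(150):
    add_message("ticket-42", "user", f"message {i}")
print(len(context_for("ticket-42")))  # → 100
```

Reusing the same Conversation ID across executions is what turns the stateless default into a stateful, multi-turn session.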

Knowledge Base tab

Use the Knowledge Base tab to provide the AI agent with contextual data (Grounding). By linking documents or references, you enable the agent to answer questions using specific enterprise data rather than relying solely on its general training.

1. Use Files: Select the Use Files checkbox to define knowledge sources directly within this step using local file paths.
   When selected, enables the File or Folder table for direct path configuration.
   When cleared, enables the Reference field to accept a Knowledge Base object from a previous workflow step.
   Supported extensions: digital pdf, html, xml, csv, json, txt, doc, docx.
2. Reference: Select the variable containing a pre-processed Knowledge Base object. This allows the agent to utilize data ingested and indexed by a previous Knowledge Base step in the workflow.
   Note: The field is enabled only when Use Files is cleared, and accepts input only from a Knowledge Base step.
3. Use File as Attachment: Determines how files are processed by the LLM.
   Cleared (default): Extracts and indexes text snippets.
   Selected: Sends the entire file to the LLM.
   Supported formats by provider:
   - OpenAI: jpg, jpeg, png, pdf
   - Azure OpenAI: jpg, jpeg, png
   - Google Gemini: jpg, jpeg, png
   - Google Vertex AI: pdf, jpg, jpeg, png, txt, json, csv, html, css, py
   - AWS Bedrock: jpg, jpeg, png, digital pdf
4. File or Folder: Specify the documents the step must analyze during execution by adding one or more files and/or folders to the File Path list.
   The step supports multiple file and folder paths, enabling it to analyze content from different sources in a single execution. During runtime, it intelligently extracts only the information relevant to the provided prompt from the selected files or folders.
   To add inputs:
   - Click File to select an individual file.
   - Click Folder to select an entire folder.
   Supported file extensions: digital pdf, html, xml, csv, json, txt, doc, docx.
   Once selected, the corresponding paths are displayed in the File Path table for reference.
   The field accepts variable or static values.
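The snippet-extraction mode described above (Use File as Attachment cleared) amounts to scoring document chunks against the prompt and keeping only the most relevant ones. The sketch below shows the idea with a crude word-overlap score; the real Knowledge Base indexing is more sophisticated, and the sample chunks are invented:

```python
# Grounding sketch: rank text chunks by overlap with the prompt and
# keep the top k as context for the LLM.
def top_snippets(prompt, chunks, k=2):
    words = set(prompt.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

chunks = [
    "Refund policy: a refund is issued within 14 days of purchase.",
    "Office hours are 9am to 5pm on weekdays.",
    "Shipping is free for orders above 50 EUR.",
]
print(top_snippets("what is the refund policy", chunks, k=1))
```

This is why the step can hold many files yet answer from only the passages that match the current prompt.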

Tool Configuration tab

Use the Tool Configuration tab to select the AutomationEdge workflows and/or MCP servers that the AI Agent step can use as tools. Unlike standard automation steps that run sequentially, this agent decides which tool to run, or whether to run a tool at all, based on the user's request. The agent analyzes the user's prompt, compares it to the Tool Descriptions configured in AutomationEdge or the MCP server, and executes a tool only if its description matches the user's intent. If no tool matches the request, the agent generates a text response using its internal knowledge (LLM only). For details about Agentic AI tool configuration, see How to use AutomationEdge workflows as tool.

Note: If Conversation Memory is enabled, the agent can also find missing parameters in the previous chat history.

1. AE Tools: Select the specific AutomationEdge workflows to enable for this agent.
   The step reads each workflow's description to understand its purpose.
   Note: Only workflows with Use As Tool enabled are listed.
2. Refresh: Click Refresh to update the list of available tools retrieved from the AutomationEdge server and MCP servers.
3. Tools Requiring Confirmation: Select the AE tools and MCP server tools that require user approval before execution.
   Use this feature to require human approval for sensitive actions, for example deleting data or sending emails. Select the checkbox next to any tool that should not run automatically.
   When the agent tries to use a selected tool, it pauses and waits for permission. The tool runs only after a user approves the request; if the user rejects it, the action is blocked.
   Notes:
   - If you are executing the workflow from the AE UI, a task is created for the tool using the defined template, and the status remains Awaiting Input until an approval action occurs.
   - Click MCP Filter to filter the tools in the list.
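The confirmation gate can be pictured as a guard around tool execution. The sketch below is hypothetical (in AutomationEdge the approval is raised as a task via the configured Approval task template, not a callback), but it shows the pause/approve/block behavior:

```python
# Tools flagged on the Tools Requiring Confirmation list (hypothetical names).
TOOLS_REQUIRING_CONFIRMATION = {"delete_records", "send_email"}

def execute_tool(name, run, request_approval):
    """Run a tool, pausing for approval if it is on the confirmation list."""
    if name in TOOLS_REQUIRING_CONFIRMATION:
        # In the real step this raises an approval task and waits
        # (status: Awaiting Input) until an approver acts.
        if not request_approval(name):
            return "blocked: approval rejected"
    return run()

# Simulate a rejected approval.
result = execute_tool("delete_records",
                      run=lambda: "records deleted",
                      request_approval=lambda name: False)
print(result)  # → blocked: approval rejected
```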

MCP Server List

Use the MCP (Model Context Protocol) Server List to connect the agent to external tool ecosystems. An MCP server is a system that helps agentic AI connect to external tools and data sources over a secure, standardized protocol. When the AI receives a prompt, it sends the request to the MCP server; the server runs the required tool and returns the results. This makes it easier for the AI to perform tasks without setting up each tool separately.
Click MCP Add to add MCP server details.
For more details on MCP servers, see the MCP server section.

Note

Only MCP servers without authentication are supported.

i. MCP Server Name: Specify the name used to identify the MCP server (for example, OrderProcessing_MCP).
ii. URL: Specify the endpoint URL (for example, https://mcp.example.com/api). Only servers without authentication are supported.
iii. Test: Click Test to verify connectivity.
iv. View: Click View to inspect server details and available tools.
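Under the hood, MCP is based on JSON-RPC 2.0; for example, a client discovers a server's tools with a `tools/list` request. The sketch below only builds that payload rather than sending it, and any server URL you would post it to is outside this example:

```python
import json

# A standard MCP tool-discovery request (JSON-RPC 2.0).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
payload = json.dumps(request)
print(payload)
```

The tool descriptions returned by this call are what the agent compares against the user's prompt when deciding which MCP tool to invoke.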

Output tab

Use the Output tab to save the results of the AI Agent's execution into variables. You can reference these field names in later steps of your workflow to use the AI's answer or track usage costs.

1. Response: Specify the field name that stores the AI's final answer. This variable captures the main text reply or the result of a tool execution.
   For example, if you name this field AnalysisResult, you can reference it in a later email step to send the answer to a user.
   Default value: response
2. Total tokens: Specify a field name to track the total number of tokens used during the execution. This value is the sum of input and output tokens, representing the total processing size of the transaction.
   Default value: totalTokens
3. Input Tokens: Specify a field name to count the tokens sent to the AI model. This count includes your prompts, system instructions, and any attached files or context used to ask the question.
   Default value: inputTokens
4. Output Tokens: Specify a field name to count the tokens generated by the AI in its reply. This measures the length of the response provided by the model.
   Default value: outputTokens
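The three token fields obey a simple identity: the total equals input plus output. A sketch using the default field names (the numeric values here are made up for illustration):

```python
# Output fields produced by one execution, using the documented defaults.
output_fields = {
    "response": "No breach detected; the entry is a routine port scan.",
    "inputTokens": 312,   # prompts, system instructions, attached context
    "outputTokens": 58,   # tokens generated in the model's reply
}
# totalTokens is simply the sum of the other two counters.
output_fields["totalTokens"] = (output_fields["inputTokens"]
                                + output_fields["outputTokens"])
print(output_fields["totalTokens"])  # → 370
```

Tracking these counters in later steps is a practical way to monitor per-execution usage costs.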