Agentic AI
Introduction
This document outlines the features and usage of AutomationEdge’s Agentic AI plugin.
Agentic AI refers to an AI system that can understand a goal, analyze input, and take actions autonomously to achieve the desired outcome. Instead of responding to a single prompt, an agentic system can reason, make decisions, use tools, and adapt its behaviour based on context and available information.
In automation workflows, Agentic AI helps process complex tasks such as understanding user intent, retrieving relevant knowledge, generating content, and producing structured outputs with minimal manual intervention.
The Agentic AI plugin enables users to draft, customize, extract, and personalize content or text using advanced generative AI techniques.
The plugin consists of five key steps—AI Agent, Classifier, Knowledge Base, LLM, and Summarizer—each designed to process, analyze, and enhance input data intelligently to produce context-aware and high-quality outputs.
Common Tab Details: LLM Settings tab
The following table explains the LLM Settings tab and its fields, a configuration common to all steps.
| No. | Field Name | Description |
|---|---|---|
| | LLM Settings tab | This tab allows you to configure the Large Language Model (LLM) settings, from selecting the provider to choosing the specific model used for reasoning and execution. This configuration establishes the "intelligence" layer used to interpret prompts and orchestrate tool usage. This tab is mandatory. |
| 1 | LLM provider | Select the LLM provider from the list. This is the LLM platform used to interpret prompts and orchestrate tool usage. The selected provider determines the available LLM models and supported properties (such as reasoning effort or response format). Note: Changing the LLM provider refreshes the available properties; you must reconfigure dependent fields to match the selected provider's schema. Available LLM providers: - OpenAI - Azure OpenAI - Google Vertex AI (Gemini only) - Google Gemini - AWS Bedrock. Default value: OpenAI. The field accepts variable or static values and is mandatory. |
| 2 | Model | Select the model to use for reasoning and task execution. The model determines the speed, cost, and intelligence level of the agent. The field accepts variable or static values and is mandatory. Notes: - The list of available models populates dynamically based on the selected LLM provider. - The field is enabled only for OpenAI, Google Vertex AI, Google Gemini, and AWS Bedrock. - For the Azure OpenAI provider, specify the model name in the deploymentName property. |
| 3 | Test | Click Test to validate the configurations. This action initiates a connectivity check to verify authentication and API accessibility, ensuring the agent can successfully interact with the LLM provider before runtime. |
| | LLM Settings table | Use the table to define configuration parameters (such as temperature, max_tokens, top_p, top_k, and so on) that control the model's behavior and its output. |
| 1.a | Property | Select the configuration parameter to define. The available properties vary based on the selected LLM provider and model. Notes: - Default properties: Essential properties populate automatically based on the selected provider. For details, see LLM Configuration: Default Property table. - Additional properties: To configure additional parameters, select an empty cell in the Property column and choose a value from the list. For property details, see LLM Configuration: Additional Properties. |
| 1.b | Value | Specify the value for the selected property. Ensure the value adheres to the data type expected by the property (for example, an integer for max_tokens or a float for temperature). The field accepts variable or static values. |
| | LLM Headers tab | Use this tab to configure custom HTTP headers for the LLM API request. This is typically used for advanced authentication schemes, routing calls through API gateways or ESB endpoints, organization IDs, custom telemetry tracking, and so on. |
| 1.a | Header Key | Specify the name of the HTTP header field (for example, X-Org-ID or Authorization). |
| 1.b | Header Value | Specify the value corresponding to the header key. |
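Taken together, the LLM Settings and LLM Headers tabs describe a single configuration object. The sketch below illustrates that shape in Python; the field names, model name, and header values are illustrative assumptions, not the plugin's actual internal schema (in the product these values are entered through the step UI).

```python
# Hypothetical sketch of a step's LLM configuration; names are illustrative.
llm_config = {
    "llm_provider": "OpenAI",      # OpenAI, Azure OpenAI, Google Vertex AI, Google Gemini, or AWS Bedrock
    "model": "gpt-4o",             # available models depend on the selected provider
    "properties": {                # LLM Settings table: property/value pairs
        "temperature": 0.2,        # float: lower values give more deterministic output
        "max_tokens": 1024,        # integer: upper bound on generated tokens
        "top_p": 0.9,              # float: nucleus-sampling cutoff
    },
    "headers": {                   # LLM Headers tab: custom HTTP headers
        "X-Org-ID": "acme-corp",   # e.g. routing through an API gateway
    },
}

def validate(config: dict) -> None:
    """Mimic the type checks implied by the Value field description."""
    assert isinstance(config["properties"]["max_tokens"], int)
    assert isinstance(config["properties"]["temperature"], float)

validate(llm_config)
```

Note that for Azure OpenAI the model would instead be supplied through a `deploymentName` property, as described above.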
Retry and Timeout Behavior:
The effective connect and read timeouts may vary because internal retry attempts are triggered by runtime exceptions. Each retry reattempts the request, increasing total execution time. You can control this behavior with the retryAttempts configuration (where supported); higher retry attempts improve fault tolerance but may extend execution time.
Retry Behavior by LLM Provider:
- OpenAI: Retries are attempted up to retryAttempts times. Retries occur only for RuntimeException.
- retryAttempts:1
- initialDelay: 1 second
- retryAttempt: Retry attempt number (starting from 1)
  - maxDelay: readTimeout seconds
- Google Gemini: Retries are attempted up to retryAttempts times. Retries occur only for RuntimeException.
- retryAttempts: 1
- initialDelay: 1 second
- retryAttempt: Retry attempt number (starting from 1)
- maxDelay: readTimeout seconds
- Google Vertex AI: Retries are attempted up to retryAttempts times. Connect and read timeouts are applied per request attempt.
- Default retryAttempts: 1 (can be overridden)
- reqTime: connectTimeout + readTimeout
- Per attempt limit: Each attempt can run up to reqTime
- Total execution limit: retryAttempts × reqTime
- Backoff: Not applied (same timeout for every retry)
- Azure OpenAI: Retries are handled internally by the provider and may impact connect and read timeouts.
- Default internal retryAttempts: 4
- AWS Bedrock: Retries are handled internally by the provider and may impact connect and read timeouts.
- Default internal retryAttempts: 4
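The parameters above can be combined into a simple bounded-retry loop. The sketch below is an assumption of how such logic might behave, not the plugin's actual implementation: only runtime errors are retried, the delay starts at initialDelay and never exceeds the readTimeout cap (maxDelay), and the doubling backoff is itself an assumption (Vertex AI, per the list above, applies no backoff).

```python
import time

def call_with_retries(request_fn, retry_attempts=1, initial_delay=1.0,
                      read_timeout=60.0):
    """Bounded retry loop mirroring the documented behavior.

    Per-attempt limit (Vertex AI): reqTime = connectTimeout + readTimeout.
    Total execution limit: retryAttempts x reqTime.
    """
    delay = initial_delay
    last_error = None
    for attempt in range(1, retry_attempts + 1):   # retryAttempt starts from 1
        try:
            return request_fn()
        except RuntimeError as exc:                # only runtime errors are retried
            last_error = exc
            if attempt < retry_attempts:
                time.sleep(min(delay, read_timeout))  # delay capped at maxDelay
                delay *= 2                         # backoff is an assumption here
    raise last_error

# Example: a request that fails once, then succeeds on the retry.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient failure")
    return "ok"

result = call_with_retries(flaky, retry_attempts=2, initial_delay=0.01)
```

With retryAttempts set to 2, the first failure is absorbed and the second attempt returns successfully; a single-attempt configuration would instead surface the error immediately.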