Agents

Agents are the basic building block of any agentic workflow. The AI Agent Manager simplifies the creation of agents by providing a UI that lets you quickly define an agent with a system prompt and assign it tools that it calls on its own. This gets agentic workflows started in a couple of minutes, not hours.

Custom agents

To create a custom agent, navigate to Administration > AI Agent Manager and click Add agent. Enter an agent name and select the agent type. This introduction only describes text agents. For details about object agents, refer to Text and object agents. When choosing a name, remember that naming conflicts can occur with subscribed agents provided by applications or plugins. It is good practice to use a prefix other than c8y-, as this is the default prefix used by the platform.

Once the custom agent is created, you tailor it to your needs using the following tabs:

  • Test: Test your agent directly in the AI Agent Manager.
  • Settings: Configure settings such as the maximum output tokens or the temperature.
  • Test variables: Set variable values for your test.
  • System prompt: The system prompt of the agent. Edit it and then test your changes. The system prompt persists only when you save it.
  • Tools: Assign tools to your agent.
  • Local provider: Set a different LLM provider or model for a single agent ([read more](/ai/agents/#local-providers)).
  • Advanced: Enables advanced settings in JSON format.

System prompts

The system prompt defines your agent’s behavior, expertise, and personality. It is the foundational instruction that shapes how the agent interprets user questions and formulates responses. Unlike user messages that change with each conversation, the system prompt remains constant and guides the agent throughout all interactions.

The system prompt defines the custom behavior of the agent

What to include in a system prompt:

  • Write clear and specific instructions about the agent’s role and purpose. For example, “You are a device troubleshooting assistant for industrial IoT equipment” is more effective than “You are helpful.”

  • Define the agent’s tone and communication style. Specify whether responses are formal, conversational, technical, or simplified for non-technical users.

  • Set boundaries and limitations. Explicitly state what the agent does not do or what topics it avoids. For example, “You do not provide financial advice or make purchasing decisions.”

  • Include domain knowledge and context. Add relevant background information about your IoT environment, device types, or specific terminology the agent needs to understand.

  • Specify output format preferences. Indicate whether responses are concise bullet points, detailed explanations, or follow a specific structure.

Do’s:

  • Be specific and concrete rather than vague or general.
  • Test different prompt variations to find what works best for your use case.
  • Include examples of desired behavior directly in the prompt.
  • Update the system prompt based on observed agent behavior.
  • Keep the prompt focused on a single, clear purpose.
  • Use the agent’s perspective (write as “You are…” not “The agent is…”).

Don’ts:

  • Avoid contradictory instructions that confuse the agent.
  • Do not make the prompt excessively long (generally stay under 2000 words).
  • Avoid assumptions about what the agent “knows”; be explicit.
  • Do not include user-specific information that changes per interaction (use variables instead).
  • Avoid overly complex or nested conditional logic.
  • Do not use ambiguous language that has multiple interpretations.

Variables

Variables allow you to inject dynamic data into your system prompt or user prompts at runtime. Instead of hardcoding specific values, you define placeholders that get replaced with actual values when the agent is called.

Defining variables in prompts

Use double curly brackets to define variables: {{variableName}}. You can place variables anywhere in the system prompt or in API calls.

Example system prompt with variables:

You are a monitoring assistant for factory {{factoryId}}. When users ask about equipment, focus on devices in the {{location}} area. Current shift manager is {{shiftManager}}.
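Conceptually, the substitution is a simple template replacement. The following Python sketch is illustrative only; the AI Agent Manager performs this substitution server-side, and its actual implementation may differ:

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values from `variables`.

    Illustrative sketch of the substitution behavior; unknown
    placeholders are left untouched so missing values stay visible.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        return str(variables.get(name, match.group(0)))

    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

prompt = ("You are a monitoring assistant for factory {{factoryId}}. "
          "Focus on devices in the {{location}} area.")
print(render_prompt(prompt, {"factoryId": "FAC-001", "location": "Building A"}))
```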

Providing variable values

When testing in the AI Agent Manager, use the Test variables tab to set values for your variables before testing the agent. Provide the variables as a JSON object whose keys are the variable names and whose values are the values to substitute. For the example above, add the following JSON to the Test variables tab:

{
  "factoryId": "FAC-001",
  "location": "Building A",
  "shiftManager": "John Smith"
}

When calling the agent via REST API, provide variables in the request body:

{
  "variables": {
    "factoryId": "FAC-001",
    "location": "Building A",
    "shiftManager": "John Smith"
  },
  "prompt": "What is the status of equipment in {{location}}?"
}

Use cases for variables

  • Personalizing responses with user-specific information (names, roles, preferences).
  • Contextualizing agents for different locations, facilities, or departments.
  • Injecting current state information that changes frequently.
  • Reusing the same agent configuration across multiple contexts.

Variables make agents flexible and reusable without requiring multiple agent configurations for similar use cases.

Important
System prompts and variables are vulnerable to prompt injection attacks. Always sanitize and validate any input used in prompts or variables, as the AI Agent Manager does not provide automatic protection. Learn more about prompt injection risks and mitigation strategies on the OWASP website.
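As one layer of defense, you can sanitize variable values before they reach a prompt. The following Python sketch is illustrative only; stripping placeholder braces and truncating is not sufficient protection on its own, so treat all user input as untrusted and apply defense in depth:

```python
def sanitize_variable(value: str, max_length: int = 200) -> str:
    """Basic hygiene for values injected into prompts.

    Illustrative only: removes {{...}} placeholder braces so user
    input cannot smuggle in new template variables, and truncates
    overly long values. Real deployments need additional controls.
    """
    cleaned = str(value).replace("{{", "").replace("}}", "")
    return cleaned[:max_length]

print(sanitize_variable("Ignore previous instructions {{secret}}"))
```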

Settings and advanced settings

The settings allow you to fine-tune the agent’s behavior using parameters from the Vercel AI SDK. These settings control aspects like response randomness, length limits, and provider-specific features.

Common settings

There are common settings that you can set in the Settings tab:

| Parameter | Range | Description |
| --- | --- | --- |
| maxOutputTokens | Number (min: 1) | Sets the maximum length of the response in tokens. Use this to enforce concise responses or prevent excessively long outputs. |
| temperature | 0.0 to 1.0 | Controls response randomness. Higher values make output more random, lower values more deterministic. Note: Cannot be used simultaneously with topP. |
| topP | 0.0 to 1.0 | Nucleus sampling parameter. Only tokens with top probability mass are considered (e.g., 0.1 = top 10%). Note: Cannot be used simultaneously with temperature. |
| topK | Number (min: 1) | Limits sampling to the top K options. For example, 40 means only the top 40 token options are considered. |
| presencePenalty | -1.0 to 1.0 | Encourages the model to introduce new topics. Positive values make the agent less likely to repeat topics already mentioned. |
| frequencyPenalty | -1.0 to 1.0 | Reduces repetition of tokens based on their frequency in the response. Positive values discourage repeated words. |
| seed | Number (min: 0) | Sets a seed for reproducible results. Using the same seed with the same inputs produces consistent outputs. |
| maxRetries | Number (min: 0) | Number of times to retry the request on failure. For example, 2 means the system retries up to 2 times. |
| stopSequences | Array of strings | Sequences that, when generated, stop the response. For example, “END” or “\n\n”. Only available for text agents. |

Advanced settings

In the Advanced settings tab, you can specify additional options that do not yet have a UI form. The options are based on the Vercel AI SDK. For a complete list of available parameters and provider-specific options, refer to the Vercel AI SDK documentation.

Provider-specific options

Some AI providers support additional features configured through the providerOptions field. For example, Anthropic’s extended thinking mode:

{
  "anthropic": {
    "thinking": {
      "type": "enabled",
      "budgetTokens": 12000
    }
  }
}

When to use settings

  • Adjust temperature when responses are too random or too rigid.
  • Set maxOutputTokens to control costs or enforce response brevity.
  • Use penalties to reduce repetitive language in longer conversations.
  • Enable provider-specific features for specialized capabilities.

Test different settings to find the optimal configuration for your use case.

Tools

Tools extend your agent’s capabilities by allowing it to access data and perform actions beyond generating text. In the Tools tab, you assign tools from configured MCP servers to your agent.

To assign tools to an agent:

  1. Navigate to the Tools tab in the agent configuration.
  2. Browse the list of available tools from configured MCP servers.
  3. Select the tools your agent needs to answer user questions.
  4. Click Save to update the agent configuration.

The agent automatically determines when to call assigned tools based on user questions and tool descriptions. For example, if you assign a “get device measurements” tool, the agent calls it when users ask about current temperature or sensor readings.

For detailed information about tools, configuring MCP servers, and how agents use tools, refer to Tools and MCP servers.

Info
Object agents cannot use custom tools. They use tools internally to structure responses according to their defined schema.

Test

The Test tab provides an interactive interface to test your agent directly in the AI Agent Manager. This allows you to validate the agent’s behavior, system prompt, and tool usage before deploying it in production.

To test an agent:

  1. Open the agent configuration in the AI Agent Manager.
  2. Navigate to the Test tab.
  3. If your agent uses variables, set them in the Test variables tab first.
  4. Enter a prompt in the chat interface.
  5. Review the agent’s response.

The test interface maintains conversation context, allowing you to have multi-turn conversations and verify the agent remembers previous messages.

Subscribed agents

Subscribed agents are AI agents that are provided by installed applications in your Cumulocity tenant. These agents are automatically available in the AI Agent Manager once the providing application or plugin is deployed.

What are subscribed agents?

When developers build applications or plugins that include AI functionality, they define agents to be exported. These agents are then “subscribed” to your tenant and appear in the AI Agent Manager list.

Subscribed agents come with predefined:

  • System prompts that define their behavior and expertise.
  • Tool configurations that allow them to interact with specific Cumulocity data or services.

Subscribed agents properties:

  • Pre-configured: Subscribed agents are fully configured by the application provider. You do not need to define system prompts or tools.

  • Application-specific: Each subscribed agent is designed for a specific use case within an application (for example, a device troubleshooting agent for a specific device management app).

  • Require global provider: Subscribed agents use your configured global provider unless they specify a local provider. Without a global provider configured, subscribed agents remain inactive.

  • Read-only configuration: You cannot modify the system prompt or tools of subscribed agents. However, you can view their configuration to understand their capabilities. You can also overrule a subscribed agent with a custom agent: custom agents with the same name as the subscribed one take precedence.

  • Automatic updates: When the providing application updates the agent definition, the changes appear automatically in your AI Agent Manager as long as you have not overruled it.

Viewing subscribed agents

To view subscribed agents:

  1. Navigate to Administration > AI Agent Manager.
  2. In the agents list, subscribed agents are displayed with a badge indicating their source application.
  3. Click on a subscribed agent to view its details, including the system prompt and available tools.

You can investigate the system prompt of a subscribed agent

Testing subscribed agents

You test subscribed agents the same way as custom agents:

  1. Open the subscribed agent in the AI Agent Manager.
  2. Navigate to the Test tab.
  3. Enter a prompt and observe the agent’s response.

Overruling subscribed agents

While you cannot change the core configuration of subscribed agents, you can tailor them by overruling the agent. Click the three dots next to the subscribed agent and select Clone agent. You can then adapt the cloned agent to your needs, and any application that uses the subscribed agent uses this new custom agent instead.

Subscribed agent versioning

While agents in general are not versioned, subscribed agents are. They are provided by a custom or subscribed plugin that is versioned, so the agents also exist in different versions. The AI Agent Manager shows the latest version of the plugin agent; however, a custom application might use a different version. If the agent is overruled, the custom user-defined agent is always used.

Removing subscribed agents

Subscribed agents are removed automatically when you uninstall or remove the providing application. You cannot manually delete subscribed agents while their source application remains installed. If the source application is subscribed, unsubscribe from it to remove the agent.

Text and object agents

The AI Agent Manager supports two base types of agents: text agents and object agents. Understanding the difference helps you choose the right type for your use case.

Text agents

Text agents return natural language responses as plain text. They are designed for conversational interactions where the agent provides explanations, answers, or guidance in a human-readable format.

Use cases:

  • Conversational interfaces where users ask questions in natural language.
  • Explanations and guidance for device troubleshooting.
  • General-purpose AI assistants that interact through chat.
  • Generating reports or summaries in text format.

Response format:

By default, text agents return plain text:

The current temperature is 23.5°C and the humidity level is at 45%.

Add the ?fullResponse=true query parameter to receive a JSON response with additional metadata, including tool calls, reasoning steps, and usage statistics.

API endpoint:

POST /service/ai/agent/text/{agent-name}

Object agents

Object agents return structured data in JSON format according to a predefined schema. They are designed for programmatic integrations where the response must follow a specific structure.

Use cases:

  • APIs that require structured responses.
  • Data extraction where specific fields must be populated.
  • Integration with other systems that expect JSON.
  • Form filling or data validation workflows.

Response format:

Object agents always return JSON that matches the defined schema:

{
  "temperature": 23.5,
  "humidity": 45,
  "status": "normal"
}

Schema definition:

When creating an object agent, define the expected response structure using JSON schema:

{
  "type": "object",
  "properties": {
    "temperature": {
      "type": "number",
      "description": "Current temperature in Celsius"
    },
    "humidity": {
      "type": "number",
      "description": "Current humidity percentage"
    },
    "status": {
      "type": "string",
      "enum": ["normal", "warning", "critical"]
    }
  },
  "required": ["temperature", "humidity", "status"]
}

The agent uses this schema to structure its response, ensuring consistent output format.
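For example, a client consuming this object agent could spot-check a response against the schema above. The following hand-rolled Python check is illustrative only and specific to this example schema; a real integration would use a full JSON Schema validator instead:

```python
def check_response(response: dict) -> list[str]:
    """Spot-check an object-agent response against the example schema.

    Minimal hand-rolled validation for the schema shown above
    (required fields, number types, status enum). Not a general
    JSON Schema validator.
    """
    errors = []
    for field in ("temperature", "humidity", "status"):
        if field not in response:
            errors.append(f"missing required field: {field}")
    for field in ("temperature", "humidity"):
        if field in response and not isinstance(response[field], (int, float)):
            errors.append(f"{field} must be a number")
    if response.get("status") not in ("normal", "warning", "critical", None):
        errors.append("status must be one of: normal, warning, critical")
    return errors

print(check_response({"temperature": 23.5, "humidity": 45, "status": "normal"}))
# → []
```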

API endpoint:

POST /service/ai/agent/object/{agent-name}

UI support:

When you create an object agent, an additional Schema tab is shown. In this view you define the schema, and validation checks whether it conforms to the JSON Schema standard.

Key differences

| Aspect | Text agents | Object agents |
| --- | --- | --- |
| Response format | Plain text (or JSON with ?fullResponse=true) | Always JSON |
| Schema | Not required | Requires JSON schema |
| Tools | Supports custom tools | Cannot use additional tools (uses tools internally for structuring) |
| Use case | Conversational AI | Programmatic integration |
| Flexibility | High - can adapt response format | Low - follows strict schema |

Choosing the right type

Choose text agents when:

  • Building conversational interfaces or chatbots.
  • Users need natural language explanations.
  • Response format varies based on context.
  • You want to use custom tools for data access.

Choose object agents when:

  • Integrating with APIs or other systems.
  • Response must follow a strict structure.
  • Extracting specific data fields from user input.
  • Building forms or structured data collection.

Testing agent types

You test both agent types in the AI Agent Manager:

  1. Navigate to Administration > AI Agent Manager.
  2. Create or open an agent.
  3. The agent type is selected during creation and cannot be changed later.
  4. For object agents, define the JSON schema in the configuration.
  5. Use the Test tab to verify responses match your expectations.

Converting between types

You cannot convert an existing agent from one type to another. To change the agent type, create a new agent with the desired type and copy the relevant configuration.

Local providers

Local providers allow you to configure agent-specific AI provider and model settings that override the global provider configuration. This enables you to use different AI models or providers for different agents based on their specific requirements.

What are local providers?

A local provider is an agent-specific configuration that defines:

  • Which AI provider to use (for example, OpenAI, Anthropic, Google).
  • Which model to use (for example, gpt-4, claude-3-7-sonnet).
  • Provider-specific settings like API keys, base URLs, or custom parameters.

When an agent has a local provider configured, it uses those settings instead of the global provider settings.

When to use local providers

Consider the following:

  • Different model requirements: Some use cases benefit from specific models. For example, use a faster, cheaper model for simple queries and a more powerful model for complex reasoning tasks.

  • Cost optimization: Route less critical agents to more cost-effective models while keeping important agents on premium models.

  • Provider-specific features: Access features only available from certain providers, such as extended thinking modes or specialized capabilities.

  • Testing and comparison: Test different models side-by-side by creating multiple agents with different local providers.

  • Separate billing: Use different API keys for different agents to track usage or allocate costs to different departments.

Global provider versus local provider

| Aspect | Global provider | Local provider |
| --- | --- | --- |
| Scope | All agents without local providers | Single agent only |
| Configuration location | AI Agent Manager settings | Individual agent settings |
| Fallback | Used when no local provider is defined | Overrides global provider |
| Use case | Default for most agents | Special requirements |

Configuring a local provider

To configure a local provider for an agent:

  1. Navigate to Administration > AI Agent Manager.
  2. Open the agent you want to configure or create a new agent.
  3. In the agent configuration, expand the Local provider tab.

In this view you configure the local provider (JSON only). Depending on the provider you use, you can define a different provider, model, or apiKey. These settings are always merged with the global provider configuration, which allows you, for example, to override only the model to be used.

For example, to use an OpenAI-compatible LLM hosted on your own infrastructure, you could use the following configuration:

{
  "provider": "openai",
  "model": "my-custom-gpt",
  "baseURL": "https://your-custom-endpoint.com/v1",
  "strictMode": false
}
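The merge with the global provider can be pictured as a shallow key-by-key override. The following Python sketch illustrates that idea; the shallow-merge semantics are an assumption here, and the platform's actual merge rules may differ:

```python
def effective_provider(global_cfg: dict, local_cfg: dict) -> dict:
    """Merge a local provider config over the global one.

    Sketch of the merge behavior described above, assuming a
    shallow key-by-key override (assumption; actual platform
    merge rules may differ).
    """
    return {**global_cfg, **local_cfg}

merged = effective_provider(
    {"provider": "openai", "model": "gpt-4", "apiKey": "GLOBAL-KEY"},
    {"model": "my-custom-gpt"},  # local config overrides only the model
)
print(merged["model"])  # the local model wins
```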

Testing local providers

After configuring a local provider:

  1. Navigate to the Test tab of the agent.
  2. Enter a test prompt.
  3. Verify the response uses the local provider.

Managing multiple local providers

You configure local providers independently for each agent. This allows you to:

  • Use OpenAI for agent A, Anthropic for agent B, and Google for agent C.
  • Test the same agent configuration with different models by creating duplicate agents with different local providers.
  • Maintain separate API keys for different use cases or cost centers.

Security considerations

  • API keys: Local provider API keys are stored securely in Cumulocity and cannot be read after configuration. Only users with appropriate permissions can access local provider settings.

  • Access control: Ensure only authorized users have permission to configure agents and local providers, as this grants access to external AI services.

Removing a local provider

To remove a local provider and revert to the global provider, simply empty the JSON object.

Troubleshooting local providers

  • Agent not responding: Verify the API key is valid and the provider account has sufficient credits.

  • Different results than expected: Check the model selection and advanced settings in the local provider configuration.

  • Provider not available: Ensure the provider is supported by the AI Agent Manager. For the current list of supported providers, refer to the Vercel AI SDK documentation.