Atlas AI agents use language models to solve industrial problems by querying your knowledge graph. The quality of your prompts directly affects the accuracy and usefulness of agent responses. Effective prompts help agents understand what you need and deliver relevant information for tasks like maintenance planning, equipment monitoring, and operational decision-making. They can also produce accurate results faster, reduce the need for follow-up questions, and help you get more value from your industrial data.

How agent configuration affects prompting

When you build agents, your configuration settings affect how they interpret and respond to your prompts.

Language models

The language model you select affects how agents interpret your prompts and generate responses. Different models have different strengths for different types of tasks. Select a language model that matches your use case and response time needs. More advanced models can provide more detailed responses, but can respond more slowly than less advanced models. See language models for the list of available models.

Instructions and goals

When building agents, you define goals and instructions.
  • Goals define what you want the agent to accomplish and the desired outcome.
  • Instructions define how to achieve the goal and specify the workflow, scope, and approach.
Effective instructions are specific and focused. General instructions apply to all agent actions, while tool-specific instructions guide behavior for specific tasks. Learn more about configuring agent instructions in Build and publish agents.
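As a rough illustration of the distinction, the sketch below separates a goal (the outcome) from general and tool-specific instructions (the workflow). The field names and structure are hypothetical, not the actual Atlas AI configuration schema:

```python
# Hypothetical agent configuration sketch -- the field names are
# illustrative only, not the actual Atlas AI schema.
agent_config = {
    # The goal describes *what* the agent should accomplish.
    "goal": "Help maintenance planners prioritize upcoming work orders.",
    "instructions": {
        # General instructions apply to every action the agent takes.
        "general": [
            "Answer only from data in the knowledge graph; say so when data is missing.",
        ],
        # Tool-specific instructions guide behavior for one task.
        "tools": {
            "document_search": "Prefer the most recent maintenance reports.",
        },
    },
}

print(agent_config["goal"])
```

Keeping the goal outcome-focused and pushing procedural detail into instructions makes it easier to refine one without rewriting the other.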

Tools

Agents use tools to access Cognite Data Fusion (CDF) data, run calculations, or integrate with external systems. The tools you enable control what tasks agents can perform. When building agents, select only the tools that you need for your use case to help improve agent response quality. See available tools for descriptions of each tool you can enable.

Writing effective prompts

Apply the following principles to write prompts that can produce accurate, relevant responses.

Be specific and clear

Specific prompts can produce better results than vague ones. Use action verbs to clarify what you want the agent to do.
For example: "Tell me about Pump P-101."
This vague prompt doesn’t give the agent enough information to focus the response. The agent can’t determine whether you need operational status, maintenance history, specifications, or location information, which can result in irrelevant or incomplete responses.
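A specific prompt names the action, the equipment, and the information you need. The helper below is purely illustrative (not an Atlas AI API), but it shows the three pieces a sharper version of the prompt above would carry:

```python
def build_prompt(action: str, equipment: str, detail: str) -> str:
    """Combine an action verb, an equipment tag, and the detail you need
    into one specific prompt. Illustrative helper, not an Atlas AI API."""
    return f"{action} the {detail} for {equipment}."

# Vague:    "Tell me about Pump P-101."
# Specific: name the action and the exact information you need.
prompt = build_prompt(
    "Summarize", "Pump P-101", "maintenance history from the last 12 months"
)
print(prompt)
# Summarize the maintenance history from the last 12 months for Pump P-101.
```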

Provide context

Context helps agents focus on relevant data by narrowing the scope of your request.
For example: "Show me the temperature trends."
Without knowing which equipment, sensor, or time period to analyze, this prompt leaves the agent searching through irrelevant data. You might receive temperature trends for the wrong equipment or an unhelpful time range.
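One way to add that missing context is to name the equipment, the sensor, and an explicit time window. The asset tag and dates below are made up for illustration:

```python
from datetime import date, timedelta

# Illustrative only: narrow the request with a hypothetical asset tag
# and an explicit time window.
equipment = "Compressor C-230"
end = date(2026, 1, 27)
start = end - timedelta(days=7)

prompt = (
    f"Show me the discharge temperature trend for {equipment} "
    f"from {start.isoformat()} to {end.isoformat()}."
)
print(prompt)
```

With the equipment and time range stated, the agent no longer has to guess which data to search.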

Define the desired output

Specify the format, structure, and level of detail you need in the agent response.
For example: "Give me maintenance data for Valve V-789."
This prompt doesn’t tell the agent whether you need maintenance history, upcoming schedules, or cost data. It also doesn’t specify a time frame or how to structure the response, which can lead to unhelpful or unfocused results.
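A sharper version states the scope, time frame, and response format explicitly. The sketch below assembles such a prompt from those pieces; the specific values are illustrative:

```python
# Illustrative sketch: spell out scope, time frame, and format
# instead of asking for generic "maintenance data".
request = {
    "scope": "maintenance history",  # not schedules or cost data
    "equipment": "Valve V-789",
    "time_frame": "last 6 months",
    "format": "a table with date, work order ID, and work performed",
}

prompt = (
    f"List the {request['scope']} for {request['equipment']} "
    f"over the {request['time_frame']}, formatted as {request['format']}."
)
print(prompt)
```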

Understanding agent responses

When agents produce unexpected results, Atlas AI provides insight into their decision-making process to help you refine your prompts.

Analyzing the reasoning field

Atlas AI displays agent decision-making in a reasoning field that shows how agents understand prompts, select tools, and process information. Use the reasoning field to understand agent behavior and refine your approach.
  • Check if agents follow your specified workflow or deviate from expected steps
  • Identify how agents interpret ambiguous terms or instructions
  • Review why agents select specific tools at decision points
  • Observe what information guides agent responses and decision-making
When the reasoning reveals gaps between your intent and the agent’s interpretation, adjust your prompt to provide clearer direction or more specific context.

Reviewing tool calls

Tool call inputs and outputs reveal how agents translate your prompts into actions. Review tool calls to understand how agents execute your requests.
  • Verify that all required parameters are included and formatted correctly
  • Confirm the agent selected appropriate tools for your request
  • Observe how agents interpret and use the tool results in their responses
  • Check how agents handle tool failures or unexpected outputs
When tool calls don’t match your expectations, make your prompt more explicit about the data, parameters, or actions you need.
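For instance, if a tool call is missing parameters, your prompt likely never supplied them. The sketch below checks a tool-call record for required inputs; the record structure and parameter names are hypothetical, not the actual Atlas AI tool-call format:

```python
# Hypothetical tool-call record -- structure and field names are
# illustrative, not the actual Atlas AI format.
tool_call = {
    "tool": "time_series_query",
    "inputs": {"equipment": "Pump P-101", "metric": "vibration"},
}

# Parameters this (hypothetical) tool needs to run a focused query.
required = {"equipment", "metric", "start_time", "end_time"}
missing = required - tool_call["inputs"].keys()

if missing:
    # A missing parameter usually means the prompt never specified it.
    print(f"Prompt did not specify: {', '.join(sorted(missing))}")
```

Here the absent `start_time` and `end_time` would point to adding an explicit time range to the prompt.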
Last modified on January 27, 2026