Troubleshooting

The simulation doesn't start on schedule

Check the connector status and ensure it's running. Verify that the routine's schedule is correctly configured and enabled. Check simulator logs for errors or warnings.

Simulation fails due to invalid input data

Review your data validation settings. Try adjusting the validation window or modifying the logical checks to ensure valid data is available for the simulation. If simulations consistently fail to find valid conditions, consider relaxing the validation criteria. Review historical data to understand typical process behavior and adjust the validation settings accordingly.

Improve performance

Consider optimizing your routines, distributing load across multiple connectors, and ensuring your simulation models are as efficient as possible.

Most commercial simulators rely on local or server licenses to run simulations. The machine hosting the simulator must have access to a valid license. If licenses are shared across a network, make sure there are sufficient licenses available for connectors to use when triggering simulations. Consider providing dedicated licenses for simulation connectors to ensure stable operations.

Simulations fail with "No data found for Input Time Series"

When using input time series, ensure reliable data availability. Select an appropriate aggregate function depending on the type of process sensor. Consider disabling data sampling to use the latest datapoints if no up-to-date data is available.

Scheduled simulations are being skipped

If scheduled simulations are missing, this could be due to connector downtime during the scheduled execution time. Connectors only execute future schedules, so any simulations scheduled during connector downtime (maintenance, restarts, etc.) will be skipped and not retroactively executed.

For high-availability requirements, consider these alternatives:

  1. Use Cognite Functions or Data workflows for scheduling instead of the connector's native scheduling mechanism. These cloud-based scheduling features will:
  • Track scheduled runs even when the connector is temporarily offline. This applies when simulation runs are triggered with the queue parameter set to true.
  • Let you edit the schedule configuration without creating a new routine revision.
  2. Implement a recovery mechanism using Cognite Functions (see the sketch after this list) to:
  • Detect skipped simulation runs.
  • Trigger retrospective simulations using the Simulators API by setting runTime to the skipped simulation's scheduled timestamp.
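
A minimal sketch of the retrospective trigger, written as a Cognite Functions handler in Python, could look like the following. Only the runTime field comes from this guide; the /simulators/runs path, the routineExternalId field name, and the shape of the incoming data are assumptions made for illustration, so check the Simulators API reference for the exact schema. The detection of skipped runs is left out.

```python
# Hypothetical sketch: re-trigger a skipped simulation run from a Cognite Function.
# The endpoint path and payload field names other than runTime are assumptions.

def handle(client, data):
    """Trigger a retrospective simulation for a skipped schedule slot.

    `data` is expected to carry the routine external ID and the skipped
    schedule timestamp in epoch milliseconds, e.g. produced by a detection step.
    """
    project = client.config.project
    payload = {
        "items": [
            {
                "routineExternalId": data["routine_external_id"],  # assumed field name
                "runTime": data["skipped_schedule_time_ms"],       # timestamp of the skipped slot
            }
        ]
    }
    # CogniteClient.post sends an authenticated request to a raw API path.
    response = client.post(f"/api/v1/projects/{project}/simulators/runs", json=payload)
    return {"status": response.status_code}
```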

HTTP 429 (Too many requests) response status code

Simulator integration endpoints return the HTTP 429 (too many requests) response status code when the back-end capacity limit is exceeded. Throttling can happen if a user (a person or a machine) sends more than the allocated number of requests in a given amount of time. The limit is 1000 requests per minute for the Simulators API.

Make sure all automations that send requests to the Cognite API don't exceed this rate limit.

Recommendations:

  1. Apply limits: Apply limits to your queries to avoid fetching too much data at once.
  2. Spread out your requests: If you have a large number of queries, break them into smaller chunks and spread them over a period of time (see the sketch after this list). This approach to data fetching is closer to "streaming" than to "batch".
  3. Use different data storage for frequently accessed data: If you have simulation results from multiple simulations and want to display them in a dashboard, consider storing the aggregated data in a materialized view and querying that instead. For example, the materialized view can be a time series or a purpose-built data model.
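
A minimal sketch of request pacing is shown below. It assumes a fetch_page callable that performs one Simulators API request for a chunk of items; the function name, chunk size, and request budget are illustrative and only need to keep the total rate below the 1000 requests per minute limit.

```python
import time
from typing import Callable, Iterable, List

def fetch_in_chunks(
    items: Iterable,
    fetch_page: Callable[[list], list],
    chunk_size: int = 100,
    max_requests_per_minute: int = 600,  # stay well below the 1000/min limit
) -> List:
    """Split a large query into chunks and pace the requests over time."""
    min_interval = 60.0 / max_requests_per_minute  # seconds between requests
    results = []
    chunk = []
    for item in items:
        chunk.append(item)
        if len(chunk) == chunk_size:
            results.extend(fetch_page(chunk))
            chunk = []
            time.sleep(min_interval)  # spread requests instead of sending a burst
    if chunk:
        results.extend(fetch_page(chunk))
    return results
```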

While direct consumption from the Simulators API is possible, users must consider the 1000 requests per minute limit. For use cases that may exceed this limit, we recommend materializing the data into consumption-optimized storage.

You can implement a background job using Cognite Functions and/or Data workflows to efficiently stream data from the Simulators API to your desired materialized storage within CDF. This approach ensures:

  • Efficient data access for high-throughput applications
  • Respect for API rate limits through controlled data streaming
  • Optimized storage for frequent data retrieval
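
Such a background job might, for example, read recent simulation results through the Simulators API and write the values it needs into a regular CDF time series. The sketch below assumes a get_recent_results helper that wraps the relevant Simulators API call (while respecting the rate limit) and returns (timestamp, value) pairs, and a pre-created time series with the external ID sim-results-materialized; both are illustrative.

```python
from cognite.client import CogniteClient

# Illustrative external ID of a pre-created time series used as materialized storage.
MATERIALIZED_TS = "sim-results-materialized"

def materialize_results(client: CogniteClient, get_recent_results) -> None:
    """Copy recent simulation results into a time series for dashboard queries.

    `get_recent_results` is an assumed helper that calls the Simulators API
    and yields (timestamp_ms, value) pairs.
    """
    datapoints = [(ts, value) for ts, value in get_recent_results()]
    if datapoints:
        # Dashboards then query the time series instead of the Simulators API.
        client.time_series.data.insert(datapoints, external_id=MATERIALIZED_TS)
```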

Cognite recommends using a retry strategy based on truncated exponential backoff with jitter to handle requests that receive an HTTP 429 response.
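
A minimal sketch of truncated exponential backoff with jitter is shown below. It assumes a send_request callable that returns an object with a status_code attribute (as the requests library does); the base delay, cap, and attempt count are illustrative.

```python
import random
import time

def call_with_backoff(send_request, max_attempts=6, base_delay=1.0, max_delay=30.0):
    """Retry a request on HTTP 429 using truncated exponential backoff with jitter."""
    for attempt in range(max_attempts):
        response = send_request()
        if response.status_code != 429:
            return response
        # Exponential backoff, truncated at max_delay, with full jitter.
        delay = random.uniform(0, min(max_delay, base_delay * 2 ** attempt))
        time.sleep(delay)
    return response  # give up after max_attempts; the caller handles the last 429
```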

See more about request throttling.