Legacy data modeling
This resource is part of the asset-centric data model.
  • New projects: We recommend using the data modeling service for greater flexibility and performance.
  • Existing projects: This resource remains fully supported for maintaining legacy applications.
Looking for datapoint operations? See Time series and datapoints for inserting, retrieving, and querying time series data. This page focuses on time series metadata management for asset-centric projects.
The time series API lets you create, update, search, and manage time series metadata in asset-centric projects. For data modeling projects, use the Instances API to manage time series as instances in your data model.

A time series indexes a series of data points in time order. Examples include the temperature of a water pump asset, monthly precipitation in a location, and the daily average number of manufacturing defects. An asset can have several time series connected to it. For example, a water pump asset can have time series that measure pump temperature, pressure, rpm, flow volume, power consumption, and more.

Each time series has a unique id generated at creation. You can specify an externalId to control the identifier, which must be unique within a project. Time series can also have metadata key-value fields and labels for organization and categorization.

Count time series matching filter criteria

Count the number of time series that match selected filtering criteria, such as being part of a specific data set or following a naming convention for externalId.
POST /api/v1/projects/daitya/timeseries/aggregate HTTP/1.1
content-type: application/json

{
  "filter": {
    "dataSetIds": [
      {
        "externalId": "Cognite data quality monitoring alerts and metrics"
      }
    ],
    "externalIdPrefix": "dq_monitor"
  }
}
The response will look similar to this:
{
  "items": [
    {
      "count": 273
    }
  ]
}
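The count sits inside the items array of the response. A minimal Python sketch of reading it, assuming the JSON body has already been fetched with an HTTP client of your choice (the helper name is illustrative, not part of the API):

```python
def aggregate_count(response: dict) -> int:
    """Extract the total count from a timeseries/aggregate response.

    The endpoint returns {"items": [{"count": <n>}]}; summing over items
    is a defensive choice in case more than one item is present.
    """
    return sum(item["count"] for item in response.get("items", []))

# Using the example response shown above:
response = {"items": [{"count": 273}]}
print(aggregate_count(response))  # 273
```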

Best practices

Use these tips to increase throughput and query performance.

Message size and batching

Send many data points from the same time series in the same request. If you have room for more data points in the request, you can add more time series. Preferred batch sizes are up to 100k numeric data points or 10-100k string data points, depending on the string length (around 1 MB is good). For each time series, group the data points in time. Ideally, different requests shouldn’t have overlapping time series ranges. If data changes, update existing ranges.
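The batching advice above can be sketched as a small helper that sorts each series' data points in time order and packs them into request-sized batches. This is an illustration under assumed in-memory data structures, not an API client; the 100k default follows the guidance above and all names are hypothetical:

```python
def make_batches(points_by_series, max_points=100_000):
    """Yield batches of (externalId, [(timestamp, value), ...]) tuples.

    points_by_series: dict mapping externalId -> iterable of
    (timestamp, value) pairs. Each series is sorted by timestamp so a
    batch never carries out-of-order points, and a batch is emitted
    once it reaches max_points data points in total.
    """
    batch, batch_size = [], 0
    for external_id, points in points_by_series.items():
        ordered = sorted(points)  # group each series' points in time order
        i = 0
        while i < len(ordered):
            room = max_points - batch_size
            chunk = ordered[i:i + room]  # fill remaining room in the batch
            batch.append((external_id, chunk))
            batch_size += len(chunk)
            i += len(chunk)
            if batch_size >= max_points:
                yield batch
                batch, batch_size = [], 0
    if batch:
        yield batch
```

Because each series' points are sorted before packing, consecutive requests carry contiguous, non-overlapping time ranges per series, which matches the guidance above.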

Error handling

For general guidance on handling 429 Too Many Requests responses, exponential backoff, and retry strategies, see API rate limits. If you receive repeated 500 Internal Server Error responses, you may have found a bug. Contact support@cognite.com and include the request ID from the error message.
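A minimal sketch of the retry pattern described above, using only the standard library. The `call` wrapper and the `status` attribute on the exception are assumptions for the example, not part of any client library:

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=0.5):
    """Retry `call` on 429 or 5xx responses with exponential backoff.

    `call` is assumed to raise an exception carrying a `status`
    attribute with the HTTP status code; adapt this to whatever HTTP
    client you use. Non-retryable errors are re-raised immediately.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            status = getattr(exc, "status", None)
            retryable = status == 429 or (status is not None and status >= 500)
            if not retryable or attempt == max_attempts - 1:
                raise
            # Exponential backoff: base_delay, 2x, 4x, ... plus jitter
            # so many clients don't retry in lockstep.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```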

Retries and idempotency

Most API endpoints, including all datapoint requests, are idempotent: you can send the same request several times without any effect beyond the first request. When you modify or delete time series, subsequent requests can fail harmlessly if the references have become invalid. The only endpoint that isn't idempotent is creating a time series without an externalId: each successful request creates a new time series. We therefore recommend always setting externalId on your time series.
CDF always applies the complete request or nothing at all. A 200 response indicates that the complete request was applied.
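The idempotency difference can be illustrated with a toy in-memory model (purely illustrative, not the API or an SDK): creating without an externalId mints a fresh id on every call, while a retried create with the same externalId maps back to the same series.

```python
import itertools

class ToyTimeSeriesStore:
    """In-memory illustration of create semantics.

    Without an externalId, every create mints a new internal id, so a
    retried request silently produces a duplicate series. With an
    externalId, a retry resolves to the existing series.
    """
    def __init__(self):
        self._ids = itertools.count(1)
        self._by_external_id = {}

    def create(self, external_id=None):
        if external_id is None:
            return next(self._ids)  # a new series on every call
        if external_id not in self._by_external_id:
            self._by_external_id[external_id] = next(self._ids)
        return self._by_external_id[external_id]

store = ToyTimeSeriesStore()
print(store.create())             # 1
print(store.create())             # 2  (duplicate series!)
print(store.create("pump_temp"))  # 3
print(store.create("pump_temp"))  # 3  (retry is safe)
```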
Last modified on April 23, 2026