Works with both data modeling and asset-centric projects. How to identify time series by project type:
  • Data modeling: use instanceId, an object with space and externalId.
  • Asset-centric (legacy): use externalId (string) or id (integer).
See Time series in data modeling for the full DM workflow.
A time series indexes a series of data points in time order. Examples include the temperature of a water pump asset, monthly precipitation in a location, and the daily average number of manufacturing defects. Time series can be analyzed and visualized to draw inferences from the data and to identify trends, seasonal movements, and random fluctuations. Other common uses include forecasting future values for scheduling maintenance, and adjusting parameters to optimize equipment performance.

Data points

A data point is a piece of information associated with a specific time, stored as a numerical, string, or state value. Timestamps are defined in milliseconds in Unix Epoch time. Fractional milliseconds are not supported, and leap seconds are not counted.

Time series types

Time series in CDF support these types:
  • Numeric data points can be aggregated to reduce the amount of data transferred in query responses and improve performance. You can specify one or more aggregates (for example, average, minimum, maximum) and the time granularity (for example, 1h for one hour). See Aggregating time series data for details on available aggregation functions.
  • String data points store arbitrary information like states (for example, open or closed) or more complex information in JSON format. CDF cannot aggregate string data points.
  • State data points (Private Beta) represent discrete operational states of equipment with predefined valid states and specialized aggregations (duration, transitions, count per state). See State time series for more information and State time series aggregates for supported aggregations.
CDF stores discrete data points, but the underlying process measured by the data points can vary continuously. To interpolate between data points, set the isStep flag on the time series: when isStep is true, each value is assumed to stay constant until the next data point; when it is false, the value is assumed to change linearly between the two.

Each data point can have a status code that describes its quality. There are three categories: Good, Uncertain, and Bad. By default, only Good data points are returned, but you can opt in to receive all data points. For more information, see Status codes.
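The effect of the isStep flag can be illustrated with a small interpolation helper (illustrative code, not part of any SDK):

```python
def interpolate(t, p0, p1, is_step):
    """Estimate the value at time t between data points p0 and p1.

    p0 and p1 are (timestamp_ms, value) pairs with p0[0] <= t <= p1[0].
    With is_step=True the value holds constant until the next data point;
    with is_step=False it changes linearly between the two.
    """
    (t0, v0), (t1, v1) = p0, p1
    if is_step or t1 == t0:
        return v0
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

interpolate(1500, (1000, 10.0), (2000, 20.0), is_step=False)  # 15.0
interpolate(1500, (1000, 10.0), (2000, 20.0), is_step=True)   # 10.0
```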

Identifying time series

You can identify time series using one of three methods, depending on your project type:
Method     | Use case                                      | Format
instanceId | Data modeling projects                        | Object with space and externalId
externalId | Asset-centric projects or legacy integrations | String identifier
id         | Asset-centric projects (internal ID)          | Integer
For example, to reference a time series by instanceId in a data points request body:
{
  "items": [
    {
      "instanceId": {
        "space": "my_space",
        "externalId": "pump_temperature"
      },
      "limit": 100
    }
  ]
}
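The three identification methods map to differently shaped item objects in request bodies. A small helper (names are illustrative) makes the distinction explicit:

```python
def ts_reference(*, instance_id=None, external_id=None, internal_id=None):
    """Build the identifier part of a time series request item.

    Exactly one of the three identifiers must be given:
    - instance_id: a ("space", "externalId") pair, for data modeling projects
    - external_id: a string, for asset-centric projects
    - internal_id: an integer, for asset-centric projects
    """
    given = sum(x is not None for x in (instance_id, external_id, internal_id))
    if given != 1:
        raise ValueError("specify exactly one identifier")
    if instance_id is not None:
        space, ext = instance_id
        return {"instanceId": {"space": space, "externalId": ext}}
    if external_id is not None:
        return {"externalId": external_id}
    return {"id": internal_id}

ts_reference(instance_id=("my_space", "pump_temperature"))
# {'instanceId': {'space': 'my_space', 'externalId': 'pump_temperature'}}
```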

Creating time series

The approach to creating time series depends on your project type:

Data modeling projects

Create time series as instances in your data model using the Instances API. The time series must reference a view that includes the CogniteTimeSeries type from the core data model. See Time series in data modeling for detailed instructions.
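As a rough sketch, a request body for creating such an instance can look like the following. The space, view reference, version, and property names are placeholders for your own data model; check the Instances API reference for the exact schema before use:

```python
# Illustrative request body for POST /api/v1/projects/<project>/models/instances.
# All identifiers below are placeholders, not canonical values.
create_body = {
    "items": [
        {
            "instanceType": "node",
            "space": "my_space",
            "externalId": "pump_temperature",
            "sources": [
                {
                    "source": {
                        "type": "view",
                        "space": "cdf_cdm",            # placeholder view space
                        "externalId": "CogniteTimeSeries",
                        "version": "v1",               # placeholder version
                    },
                    "properties": {
                        "name": "Pump temperature",    # placeholder properties
                        "type": "numeric",
                        "isStep": False,
                    },
                }
            ],
        }
    ]
}
```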

Asset-centric projects

Create time series metadata using the Time series metadata API. This creates time series that can be referenced using externalId or id.
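A minimal sketch of such a request body, assuming a numeric time series (field values are placeholders; see the Time series metadata API reference for the full list of supported fields):

```python
# Illustrative request body for POST /api/v1/projects/<project>/timeseries.
create_body = {
    "items": [
        {
            "externalId": "pump_temperature",  # later used to reference the series
            "name": "Pump temperature",
            "isString": False,                 # numeric data points
            "isStep": False,                   # interpolate linearly between points
            "unit": "C",                       # placeholder unit
        }
    ]
}
```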

Retrieving data points

You can retrieve data points from a time series by referencing it using any of the identification methods above.
For example, to retrieve up to five data points from a time series in a data modeling project, reference it by instanceId:
POST /api/v1/projects/my-project/timeseries/data/list
Content-Type: application/json

{
  "items": [
    {
      "limit": 5,
      "instanceId": {
        "space": "my_space",
        "externalId": "outside-temperature"
      }
    }
  ]
}
The response will look similar to this:
{
  "items": [
    {
      "isString": false,
      "instanceId": {
        "space": "my_space",
        "externalId": "outside-temperature"
      },
      "datapoints": [
        {
          "timestamp": 1349732232902,
          "value": 31.62889862060547
        },
        {
          "timestamp": 1349732244888,
          "value": 31.59380340576172
        },
        {
          "timestamp": 1349732245888,
          "value": 31.62889862060547
        },
        {
          "timestamp": 1349732258888,
          "value": 31.59380340576172
        },
        {
          "timestamp": 1349732259888,
          "value": 31.769287109375
        }
      ],
      "nextCursor": "wpnaLqNvdkOrsPd"
    }
  ]
}
The equivalent request for an asset-centric project references the time series by externalId:
POST /api/v1/projects/publicdata/timeseries/data/list
Content-Type: application/json

{
  "items": [
    {
      "limit": 5,
      "externalId": "outside-temperature"
    }
  ]
}
The response will look similar to this:
{
  "items": [
    {
      "isString": false,
      "id": 44435358976768,
      "externalId": "outside-temperature",
      "datapoints": [
        {
          "timestamp": 1349732232902,
          "value": 31.62889862060547
        },
        {
          "timestamp": 1349732244888,
          "value": 31.59380340576172
        },
        {
          "timestamp": 1349732245888,
          "value": 31.62889862060547
        },
        {
          "timestamp": 1349732258888,
          "value": 31.59380340576172
        },
        {
          "timestamp": 1349732259888,
          "value": 31.769287109375
        }
      ],
      "nextCursor": "wpnaLqNvdkOrsPd"
    }
  ]
}
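Outside of raw HTTP examples, a client typically assembles these requests programmatically. A minimal Python sketch (the helper and base URL are illustrative; authentication is omitted and must be supplied by your HTTP client):

```python
BASE_URL = "https://api.cognitedata.com"  # adjust for your CDF cluster

def build_datapoints_request(project, items):
    """Build the URL and JSON body for a data points list request.

    `items` is a list of item objects, each identifying a time series by
    instanceId, externalId, or id, plus optional parameters such as limit.
    """
    url = f"{BASE_URL}/api/v1/projects/{project}/timeseries/data/list"
    return url, {"items": items}

url, body = build_datapoints_request(
    "my-project",
    [{"limit": 5, "externalId": "outside-temperature"}],
)
# Send with your HTTP client and auth headers, e.g.:
# requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
```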

Aggregate values

To visualize or analyze a longer period, you can extract aggregate values between two points in time. Data modeling and asset-centric projects use the same aggregation functions and granularities.
For example, to return the average aggregate with a granularity of one hour, limited to the first five values after the start time:
POST /api/v1/projects/my-project/timeseries/data/list
Content-Type: application/json

{
  "items": [
    {
      "limit": 5,
      "instanceId": {
        "space": "my_space",
        "externalId": "outside-temperature"
      },
      "aggregates": ["average"],
      "granularity": "1h",
      "start": 1541424400000,
      "end": "now"
    }
  ]
}
The same query for an asset-centric project references the time series by externalId:
POST /api/v1/projects/publicdata/timeseries/data/list
Content-Type: application/json

{
  "items": [
    {
      "limit": 5,
      "externalId": "outside-temperature",
      "aggregates": ["average"],
      "granularity": "1h",
      "start": 1541424400000,
      "end": "now"
    }
  ]
}
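Conceptually, the average aggregate with a 1h granularity buckets raw data points into hour-long windows and averages each bucket. An illustrative re-implementation over (timestamp, value) pairs (CDF computes this server-side; window alignment details may differ):

```python
from collections import defaultdict

HOUR_MS = 3_600_000  # one hour in milliseconds

def hourly_average(datapoints):
    """Bucket (timestamp_ms, value) pairs into 1-hour windows and average each.

    Returns (bucket_start_ms, average) pairs in time order, mirroring what
    the 'average' aggregate with granularity '1h' computes.
    """
    buckets = defaultdict(list)
    for ts, value in datapoints:
        buckets[ts // HOUR_MS * HOUR_MS].append(value)
    return [(start, sum(vs) / len(vs)) for start, vs in sorted(buckets.items())]

hourly_average([(0, 1.0), (1_800_000, 3.0), (3_600_000, 5.0)])
# [(0, 2.0), (3600000, 5.0)]
```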

Best practices

Use these tips to increase throughput and query performance.

Message size and batching

Send many data points from the same time series in the same request. If the request has room for more data points, add more time series. Preferred batch sizes are up to 100k numeric data points, or 10-100k string data points depending on string length (around 1 MB per request is a good target). Within each time series, group the data points by time, and avoid overlapping time ranges for the same time series across requests. If data changes, update the existing ranges.
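The batching advice above can be sketched as a simple chunking helper (illustrative; the batch size is the recommended upper bound for numeric data points):

```python
def batch_datapoints(datapoints, batch_size=100_000):
    """Split a time-ordered list of data points into request-sized batches.

    Up to 100k numeric data points per request is a good upper bound;
    lower it for long string values so requests stay around 1 MB.
    Because the input is time-ordered, each batch covers a contiguous,
    non-overlapping time range.
    """
    for i in range(0, len(datapoints), batch_size):
        yield datapoints[i:i + batch_size]

batch_sizes = [len(b) for b in batch_datapoints(list(range(250_000)))]
# [100000, 100000, 50000]
```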

Error handling

For general guidance on handling 429 Too Many Requests responses, exponential backoff, and retry strategies, see API rate limits. If you receive repeated 500 Internal Server Error responses, you may have found a bug. Contact support@cognite.com and include the request ID from the error message.
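A minimal retry loop with exponential backoff might look like the following sketch (illustrative; the retryable status list and delay parameters are assumptions, not CDF-prescribed values):

```python
import random
import time

def post_with_retries(send, max_retries=5, base_delay=0.5):
    """Call send() and retry with exponential backoff on 429/5xx responses.

    `send` is any zero-argument function returning a response object with a
    `status_code` attribute. Delays grow as base_delay * 2**attempt, plus a
    small random jitter to avoid synchronized retries.
    """
    for attempt in range(max_retries + 1):
        response = send()
        if response.status_code not in (429, 500, 502, 503, 504):
            return response
        if attempt == max_retries:
            return response  # give up; caller inspects the final response
        time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```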

Retries and idempotency

Most API endpoints, including all datapoint requests, are idempotent. You can send the same request several times without any effect beyond the first request. When you modify or delete time series, subsequent requests can fail harmlessly if the references have become invalid.
CDF always applies the complete request or nothing at all. A 200 response indicates that the complete request was applied.
Last modified on April 23, 2026