Works with both data modeling and asset-centric projects.
See Time series in data modeling for the full DM workflow.
| Your project type | How to identify time series |
|---|---|
| Data modeling | Use instanceId — an object with space and externalId |
| Asset-centric (legacy) | Use externalId (string) or id (integer) |
Data points
A data point is a piece of information associated with a specific time, stored as a numeric, string, or state value. Timestamps are defined in milliseconds in Unix epoch time. Fractional milliseconds are not supported, and leap seconds are not counted.

Time series types

Time series in CDF support these types:
- Numeric data points can be aggregated to reduce the amount of data transferred in query responses and to improve performance. You can specify one or more aggregates (for example, average, minimum, maximum) and the time granularity (for example, 1h for one hour). See Aggregating time series data for details on available aggregation functions.
- String data points store arbitrary information, such as states (for example, open or closed) or more complex information in JSON format. CDF cannot aggregate string data points.
- State data points (Private Beta) represent discrete operational states of equipment, with predefined valid states and specialized aggregations (duration, transitions, count per state). See State time series for more information and State time series aggregates for supported aggregations.
Use the isStep flag on the time series to indicate whether each value stays the same until the next measurement (isStep) or changes linearly between two measurements (not isStep).
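The difference between the two interpretations can be sketched as a small interpolation helper (a conceptual illustration, not part of the API):

```python
def value_at(t, t0, v0, t1, v1, is_step):
    """Interpolate the value at timestamp t between two data points (t0, v0) and (t1, v1)."""
    if is_step:
        # Step time series: the value holds until the next measurement.
        return v0
    # Non-step time series: the value changes linearly between measurements.
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

For example, halfway between a reading of 10.0 and a reading of 20.0, a step series still reports 10.0, while a non-step series reports the interpolated 15.0.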
Each data point can have a status code that describes its quality. There are three categories: Good, Uncertain, and Bad. By default, only Good data points are returned, but you can opt in to receive all data points. For more information, see Status codes.
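Opting in to all data points is done with flags on the retrieval request. A minimal sketch of such a request body follows; the field names (includeStatus, ignoreBadDataPoints, treatUncertainAsBad) are assumptions, so check the Status codes documentation for the exact names:

```python
def all_statuses_request(external_id, start_ms, end_ms):
    """Build a datapoints request body that returns Good, Uncertain, and Bad points.

    Field names below are assumptions; verify them against the Status codes docs.
    """
    return {
        "items": [{"externalId": external_id}],
        "start": start_ms,
        "end": end_ms,
        "includeStatus": True,         # return the status code with each data point
        "ignoreBadDataPoints": False,  # include Bad data points in the response
        "treatUncertainAsBad": False,  # keep Uncertain data points as Uncertain
    }
```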
Identifying time series
You can identify time series using three different methods, depending on your project type:

| Method | Use case | Format |
|---|---|---|
| instanceId | Data modeling projects | Object with space and externalId |
| externalId | Asset-centric projects or legacy integrations | String identifier |
| id | Asset-centric projects (internal ID) | Integer |
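In request bodies, the three identification methods map to item objects like the following (all identifier values here are hypothetical placeholders):

```python
# Identifying a time series by data modeling instance reference.
by_instance_id = {"instanceId": {"space": "my-space", "externalId": "pump-42-temperature"}}

# Identifying a time series by asset-centric external ID.
by_external_id = {"externalId": "pump-42-temperature"}

# Identifying a time series by asset-centric internal ID.
by_id = {"id": 123456789}
```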
Creating time series
The approach to creating time series depends on your project type:

Data modeling projects
Create time series as instances in your data model using the Instances API. The time series must reference a view that includes the CogniteTimeSeries type from the core data model.
See Time series in data modeling for detailed instructions.
Asset-centric projects
Create time series metadata using the Time series metadata API. This creates time series that can be referenced using externalId or id.
Retrieving data points
You can retrieve data points from a time series by referencing it using any of the identification methods above.

Example: retrieve data points (DM)
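A minimal sketch using Python's standard library against the datapoints list endpoint, addressing the series by instanceId. The cluster URL, project name, token, space, and external ID are placeholders; verify the endpoint path for your cluster:

```python
import json
import urllib.request

CLUSTER = "https://api.cognitedata.com"  # placeholder cluster URL
PROJECT = "my-project"                   # placeholder project name

def dm_datapoints_body(space, external_id, start_ms, end_ms, limit=100):
    """Request body for the datapoints list endpoint, using instanceId addressing."""
    return {
        "items": [
            {
                "instanceId": {"space": space, "externalId": external_id},
                "start": start_ms,  # inclusive, ms since Unix epoch
                "end": end_ms,      # exclusive
                "limit": limit,
            }
        ]
    }

def retrieve_datapoints(token, body):
    """POST the body to /timeseries/data/list and return the response items."""
    req = urllib.request.Request(
        f"{CLUSTER}/api/v1/projects/{PROJECT}/timeseries/data/list",
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["items"]
```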
Example: retrieve data points (asset-centric)
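The same request with asset-centric addressing only changes how the item identifies the series; a sketch of the body builder (identifier values are placeholders):

```python
def datapoints_body(start_ms, end_ms, external_id=None, internal_id=None, limit=100):
    """Request body for the datapoints list endpoint, using externalId or id addressing."""
    item = {"externalId": external_id} if external_id is not None else {"id": internal_id}
    item.update({"start": start_ms, "end": end_ms, "limit": limit})
    return {"items": [item]}
```

The body can then be sent to the same /timeseries/data/list endpoint as in the DM example.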
Aggregate values
To visualize or analyze a longer period, you can extract aggregate values between two points in time. Both DM and asset-centric projects use the same aggregation functions and granularities.

Example: hourly average aggregates (DM)
To return the average aggregate with a granularity of 1h for the last five hours:
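A sketch of the request body, addressing the series by instanceId (space and external ID are placeholders; the relative start time string is an assumption, so verify supported time formats in the API reference):

```python
def dm_hourly_average_body(space, external_id):
    """Average aggregate at 1h granularity over the last 5 hours, by instanceId."""
    return {
        "items": [{"instanceId": {"space": space, "externalId": external_id}}],
        "start": "5h-ago",          # assumed relative-time string; check the API docs
        "end": "now",
        "aggregates": ["average"],
        "granularity": "1h",
    }
```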
Example: hourly average aggregates (asset-centric)
To return the average aggregate with a granularity of 1h for the last five hours:
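The asset-centric variant differs only in the item identifier; a sketch of the request body (the external ID is a placeholder, and the relative start time string is an assumption):

```python
def hourly_average_body(external_id):
    """Average aggregate at 1h granularity over the last 5 hours, by externalId."""
    return {
        "items": [{"externalId": external_id}],
        "start": "5h-ago",          # assumed relative-time string; check the API docs
        "end": "now",
        "aggregates": ["average"],
        "granularity": "1h",
    }
```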
Best practices
Use these tips to increase throughput and query performance.

Message size and batching

Send many data points from the same time series in the same request. If you have room for more data points in the request, you can add more time series. Preferred batch sizes are up to 100k numeric data points or 10-100k string data points, depending on the string length (around 1 MB is a good target). For each time series, group the data points in time. Ideally, different requests shouldn't have overlapping time series ranges. If data changes, update existing ranges.

Error handling
For general guidance on handling 429 Too Many Requests responses, exponential backoff, and retry strategies, see API rate limits.
If you receive repeated 500 Internal Server Error responses, you may have found a bug. Contact support@cognite.com and include the request ID from the error message.
Retries and idempotency
Most API endpoints, including all data point requests, are idempotent: you can send the same request several times without any effect beyond the first request. When you modify or delete time series, subsequent requests can fail harmlessly if the references have become invalid. CDF always applies the complete request or nothing at all. A 200 response indicates that the complete request was applied.