```ts
const data = await client.datapoints.retrieve({ items: [{ id: 123 }] });
```

Example response:

```json
{
  "items": [
    {
      "id": 4503599627370496,
      "isString": false,
      "type": "<string>",
      "isStep": true,
      "datapoints": [
        {
          "timestamp": 1638795554528,
          "count": 123,
          "countGood": 123,
          "countUncertain": 123,
          "countBad": 123,
          "durationGood": 123,
          "durationUncertain": 123,
          "durationBad": 123,
          "average": 123,
          "max": 123,
          "maxDatapoint": {
            "timestamp": 1638795554528,
            "value": 123,
            "status": { "code": 123, "symbol": "<string>" }
          },
          "min": 123,
          "minDatapoint": {
            "timestamp": 1638795554528,
            "value": 123,
            "status": { "code": 123, "symbol": "<string>" }
          },
          "sum": 123,
          "interpolation": 123,
          "stepInterpolation": 123,
          "continuousVariance": 123,
          "discreteVariance": 123,
          "totalVariation": 123
        }
      ],
      "externalId": "<string>",
      "instanceId": {
        "space": "<string>",
        "externalId": "<string>"
      },
      "unit": "<string>",
      "unitExternalId": "<string>",
      "nextCursor": "<string>"
    }
  ]
}
```

Required capabilities:
timeSeriesAcl:READ
Retrieves a list of data points from multiple time series in a project. This operation supports aggregation and pagination. Learn more about aggregation.
Note: when start isn't specified at the top level or for an individual item, it defaults to epoch 0 (1 January 1970), which excludes any data points that may exist before 1970. Specify start as a negative number to retrieve data points before 1970.
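As a sketch of the note above (assuming a configured `client` as in the example at the top of this page; the id and timestamps are illustrative), a query for data points before 1970 would pass a negative start:

```typescript
// Illustrative query object; field names follow the parameters
// documented on this page. The id 123 is hypothetical.
const query = {
  items: [{ id: 123 }],
  start: -631152000000, // 1 January 1950 UTC, i.e. milliseconds before epoch 0
  end: 0,               // up to, but excluding, epoch 0 (1 January 1970)
};
// const data = await client.datapoints.retrieve(query);
```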
Access token issued by the CDF project's configured identity provider. Access token must be an OpenID Connect token, and the project must be configured to accept OpenID Connect tokens. Use a header key of 'Authorization' with a value of 'Bearer $accesstoken'. The token can be obtained through any flow supported by the identity provider.
Specify parameters to query for multiple data points. If you omit fields in individual data point query items, the top-level field values are used. For example, you can specify a default limit for all items by setting the top-level limit field. If you request aggregates, only the aggregates are returned. If you don't request any aggregates, all data points are returned.
1 - 100 elements

Parameters describing a query for data points.
Get datapoints starting from, and including, this time. The format is N[timeunit]-ago where timeunit is w,d,h,m,s. Example: '2d-ago' gets datapoints that are up to 2 days old. You can also specify time in milliseconds since epoch. Note that for aggregates, the start time is rounded down to a whole granularity unit (in UTC timezone). Daily granularities (d) are rounded to 0:00 AM; hourly granularities (h) to the start of the hour, etc.
Get datapoints up to, but excluding, this point in time. Same format as for start. Note that when using aggregates, the end will be rounded up such that the last aggregate represents a full aggregation interval containing the original end, where the interval is the granularity unit times the granularity multiplier. For granularity 2d, the aggregation interval is 2 days, if end was originally 3 days after the start, it will be rounded to 4 days after the start.
Returns up to this number of data points. The maximum is 100000 non-aggregated data points and 10000 aggregated data points in total across all queries in a single request.
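As a sketch of the start, end, and limit parameters together (the external ID and limit value are illustrative, not taken from this page's example):

```typescript
// Hypothetical windowed query: relative start, absolute end, explicit limit.
const query = {
  items: [{ externalId: "sensor-1" }], // illustrative external ID
  start: "2d-ago",  // N[timeunit]-ago format: data points up to 2 days old
  end: Date.now(),  // milliseconds since epoch; end is exclusive
  limit: 1000,      // return at most 1000 data points
};
```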
Specify the aggregates to return. Omit to return data points without aggregation.
Allowed aggregates: average, max, maxDatapoint, min, minDatapoint, count, sum, interpolation, stepInterpolation, totalVariation, continuousVariance, discreteVariance, countGood, countUncertain, countBad, durationGood, durationUncertain, durationBad.

The time granularity size and unit to aggregate over. Valid entries are 'month, day, hour, minute, second', or short forms 'mo, d, h, m, s', or a multiple of these indicated by a number as a prefix. For 'second' and 'minute', the multiple must be an integer between 1 and 120 inclusive; for 'hour', 'day', and 'month', the multiple must be an integer between 1 and 100000 inclusive. For example, a granularity '5m' means that aggregates are calculated over 5 minutes. This field is required if aggregates are specified.
"1h"
Defines whether to include the last data point before the requested time period and the first one after. This option can be useful for interpolating data. It's not available for aggregates or cursors.

Note: If there are more than limit data points in the time period, we will omit the excess data points and then append the first data point after the time period, causing a gap of omitted data points. When this is the case, we return up to limit+2 data points.

When doing manual paging (sequentially requesting smaller intervals instead of requesting a larger interval and using cursors to get all the data points) with this field set to true, the start of each subsequent request should be one millisecond more than the timestamp of the second-to-last data point from the previous response, because in most cases the last data point is the extra point from outside the interval.
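The manual-paging rule above can be sketched with a small hypothetical helper (not part of the SDK):

```typescript
// Given the timestamps from one response made with includeOutsidePoints: true,
// the next request should start 1 ms after the second-to-last timestamp,
// since the last data point is usually the extra point outside the interval.
function nextStart(timestamps: number[]): number {
  if (timestamps.length < 2) {
    throw new Error("need at least two data points to continue paging");
  }
  return timestamps[timestamps.length - 2] + 1;
}
```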
For aggregates of granularity 'hour' and longer, the time zone to align the aggregates to: the start of the hour, the start of the day, or the start of the month. For time zones of type Region/Location, the aggregate duration can vary, typically due to daylight saving time. For time zones of type UTC+/-HH:MM, use increments of 15 minutes.
Note: Time zones with minute offsets (e.g. UTC+05:30 or Asia/Kolkata) may take longer to execute. Historical time zones, with offsets not multiples of 15 minutes, are not supported.
"Europe/Oslo or UTC+05:30"
Ignore IDs and external IDs that are not found.
Lists of data points for the specified queries.
The list of responses. The order matches the requests order.