With CDF Transformations, you can use Spark SQL queries to transform data from the CDF staging area, RAW, into the CDF data model. CDF Transformations is an integrated part of CDF, and you can run it in your browser.
If you cannot implement your transformation logic in SQL, use a tool other than CDF Transformations that enables programmatic data transformations, for example Databricks.
You need to be a member of the transformations group to access and use CDF Transformations. Contact your project administrator if you need access.
To open CDF Transformations, navigate to the Cognite Console and select Transformations in the sidebar.
In the overview window you can see all transformations that you have access to.
To edit a transformation, click the name of the transformation.
To create a new transformation, click New transformation.
To make a copy of an existing transformation, click Duplicate.
Create and run a transformation
To create and run a transformation with CDF Transformations:
Navigate to the Cognite Console and select Transformations in the sidebar.
In the Transformations window, select New transformation.
The new transformation opens in the transformation editor.
On the Recipe tab, give the transformation a name and select the desired destination resource type.
In the SQL editor, write the Spark SQL query that selects the RAW data you want to transform and specifies how it should be transformed.
See Writing SQL queries for tips on how to read data from RAW tables and other CDF resource types.
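For example, here is a minimal sketch of a query that maps a hypothetical RAW table to the assets destination schema. The mydb, mytable, name, and description names are illustrative; key is the standard RAW row key column, and to_metadata is described under Custom SQL functions below:
select
  key as externalId, -- Use the RAW row key as the unique identifier
  name,
  description,
  to_metadata(*) as metadata -- Pack all source columns into the CDF metadata field
from mydb.mytable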
You can configure the preview limit to change the maximum number of rows read from the data sources when previewing query results. Note that even if you set this to All, the results view displays at most 10,000 rows.
Click Run query to preview the query results.
Use the query results to verify that the transformation produces the expected output, and adjust the SQL query if necessary.
Above the preview table, you can see the columns that make up the schema for the chosen destination resource type. To see the type of a column, hover over it.
NOTE: Columns that are nullable may or may not be required by the destination schema. If you're in doubt, check the API reference documentation for the relevant resource type.
You will also see any source columns that don't exist in the destination schema, and columns that have the wrong type in the destination schema.
You can also preview RAW tables directly in the recipe editor. Tables open in a new tab so you don't lose your query results.
When you have verified that the transformation works the way you want, switch to the Transform tab to complete the configuration of the transformation.
On the Transform tab, specify the API keys that the transformation should use to authenticate when reading and writing data. Having separate API keys for reading and writing allows you to transform data between different projects.
If your destination resource type is RAW, specify the RAW database and table you want to write to.
NOTE: You must create the RAW database and table before you can run the transformation. You can do this, for example, with the RAW Explorer.
Click the Transform button to manually start a new transformation job, or follow the steps in Schedule transformations below to schedule your transformation to run at regular intervals.
Schedule transformations
Under the Transform tab, you can specify a schedule that determines when and how often the transformation should run.
Schedules are specified as cron expressions. For example, 45 23 * * * will run the transformation at 23:45 (11:45 PM) every day.
Click Add schedule to activate the schedule. When a transformation is scheduled, it becomes read-only to prevent unintentional changes to future scheduled jobs.
Share transformations
You can share transformations with other users in your project to allow them to edit, run, and schedule the transformation.
When you share a transformation, you implicitly grant the other users in your project the permissions defined for the service accounts configured for the transformation. Make sure you use service accounts with only the minimum permissions needed to complete the transformations.
Writing SQL queries
The information in this section helps you efficiently query data from RAW tables and CDF resource types, and explains how you can load data incrementally.
Read data from CDF
From a RAW table
To select data from a RAW table, use the syntax
select * from mydb.mytable
If your database or table name contains special characters, enclose the name in backticks, for example:
select * from `my db`.`my table`
From other CDF resource types
To select other CDF resource types, use the syntax
select * from _cdf.events
The supported resource types are:
Load data incrementally
When reading from RAW tables, you probably want to transform only the data that has changed since the last transformation job ran.
To achieve this, you can filter on the lastUpdatedTime column to query for the rows that have changed after a specific timestamp.
When you filter on lastUpdatedTime, the filter is pushed down to the RAW service itself, so the query can be performed efficiently:
select * from mydb.mytable where lastUpdatedTime > to_timestamp(123456)
Instead of encoding the timestamp directly in the query and manually keeping it up to date every time new data has been processed, you can use the is_new function. This function returns true when a row has changed since the last time the transformation was run, and false otherwise.
The first time you run a transformation using the query below, all the rows of mytable will be processed:
select * from mydb.mytable where is_new("mydb_mytable", lastUpdatedTime)
If the transformation completes successfully, the second run will only process rows that have changed since the first run.
If the transformation fails, is_new filters the same rows the next time the transformation is run. This ensures that there is no data loss in the transformation from source to destination.
Incremental load is disabled when previewing query results. That is, is_new will always return true for all rows.
Each is_new filter is identified by a name (for example, "mydb_mytable") and can be set to any constant string. This allows you to differentiate between multiple calls to is_new in the same query and to use is_new to filter on multiple tables. To easily identify the different filters, we recommend that you use the name of the table as the name of the filter.
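For instance, here is a sketch of a query that joins two hypothetical tables, table1 and table2 (all table and column names are illustrative), with a separately named is_new filter for each:
select a.key, a.name, b.description
from mydb.table1 a join mydb.table2 b on a.key = b.key
where is_new("mydb_table1", a.lastUpdatedTime) or is_new("mydb_table2", b.lastUpdatedTime) -- One filter per table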
To process all the data even if it hasn't changed since the last transformation, change the name of the is_new filter, for example by adding a postfix with an incrementing number (e.g. "mydb_mytable_1").
This is especially useful when the logic of the query changes, and data that has already been imported needs to be updated accordingly.
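For example, renaming the filter from "mydb_mytable" to "mydb_mytable_1" (names illustrative) makes the next run process every row again:
select * from mydb.mytable where is_new("mydb_mytable_1", lastUpdatedTime) -- New filter name, so all rows are considered new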
Compatibility with v0.5
In Cognite API v0.5, the name of a time series must be unique. In API v1, you can have multiple time series with the same name, and externalId acts as the unique identifier that can be controlled by the user. For applications that still rely on the API v0.5 behavior, we automatically set the legacyName of new time series to the same value as externalId.
Custom SQL functions
In addition to the built-in Spark SQL functions, we also provide a set of custom SQL functions to help you write efficient transformations.
When a function expects var_args, it allows a variable number of arguments of any type, including the star (*) syntax.
get_names(var_args): Array[String]
Returns an array of the field names of a struct or row.
select get_names(*) from mydb.mytable -- Returns the column names of 'mydb.mytable'
select get_names(some_struct.*) from mydb.mytable -- Returns the field names of 'some_struct'
cast_to_strings(var_args): Array[String]
Casts the arguments to an array of strings. It handles array, struct, and map types by casting them to JSON strings.
select cast_to_strings(*) from mydb.mytable -- Returns the values of all columns in 'mydb.mytable' as strings
to_metadata(var_args): Map[String, String]
Creates a metadata-compatible type from the arguments. In practice, it does map_from_arrays(get_names(var_args), cast_to_strings(var_args)). Use this function when you want to transform your columns or structures into a format that fits the metadata field in CDF.
select to_metadata(*) from mydb.mytable -- Creates a metadata structure from all the columns found in 'mydb.mytable'
to_metadata_except(excludeFilter: Array[String], var_args)
Returns a metadata structure (Map[String, String]) where keys matching the strings in excludeFilter are excluded from the metadata. Use this function when you want to put most, but not all, columns into metadata, for example:
select to_metadata_except(array("myCol"), myCol, testCol) from mydb.mytable -- Creates a map where myCol is filtered out.
-- The result in this case will be Map("testCol" -> testCol.value.toString)
asset_ids(assetNames: Array[String], rootAssetName: String): Array[BigInt]
Attempts to find the given asset names in the asset hierarchy whose root asset is rootAssetName. The function returns the IDs of the matched assets.
See Assets for more information about assets in CDF.
The entire job will be aborted if asset_ids() does not find any matching assets.
select asset_ids(array("PV10"), "MyBoat")
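-- Returns the IDs of assets named "PV10" whose root asset is "MyBoat"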
is_new(name: String, version: Timestamp)
Returns true if the version provided is higher than the version found with the specified name, based on the last time the transformation was run. See Load data incrementally.
select * from mydb.mytable where is_new("mydb_mytable_version", lastUpdatedTime) -- Returns only rows that have changed since the last successful run