Setup and administration for PostgreSQL Gateway 2.0

NOTE

This section describes how to set up a PostgreSQL gateway 2.0 instance. See the migration guide if you're migrating from a previous version.

Use the PostgreSQL gateway to ingest data into Cognite Data Fusion (CDF) from popular ETL tools. The service provides a PostgreSQL interface to CDF, and the CDF resource types appear as tables in the PostgreSQL database. You can ingest data directly into CDF resource types, such as assets, events, and datapoints, and into CDF RAW.

When should you use the PostgreSQL gateway?

Consider using the PostgreSQL gateway if:

  • You're integrating a new data source that can be accessed through Azure Data Factory (ADF) or another ETL tool that supports writing to PostgreSQL. The PostgreSQL gateway can be used as a sink in ADF.
  • You have previously built an extractor that can push data to PostgreSQL, but not to CDF.

Consider other solutions if:

  • You need very high ingestion performance, especially for RAW rows or time series datapoints, on the order of tens of thousands of rows or datapoints per second (10-50k/s as a ballpark figure).

NOTE

  • Microsoft's Azure Data Factory is the officially supported ETL tool.
  • The PostgreSQL gateway is intended for ingestion only. Querying data from CDF for analysis and visualization isn't supported.

This article explains how a Microsoft Entra ID administrator can use the CDF PostgreSQL gateway to control access when ingesting data into CDF from existing ETL tools that support writing to PostgreSQL, such as Azure Data Factory.

When you have registered the PostgreSQL gateway, users can sign in with their organizational ID to integrate data in a CDF project.

NOTE

To perform the steps below, you need to be an administrator of Microsoft Entra ID.

Before you start

Make sure you have registered the Cognite API and the CDF application in Microsoft Entra ID and set up Microsoft Entra ID and CDF groups to control access to CDF data.

Step 1: Register an app in Microsoft Entra ID to use with PostgreSQL gateway

  1. Sign in to the Azure portal as an admin.

  2. If you have access to multiple tenants, use the Directory + subscription filter in the top menu to select the tenant in which you want to register an application.

  3. Search for and select Microsoft Entra ID.

  4. Under Manage, select App registrations > New registration.

  5. In the Register an application window, enter the app name, and then select Register.

  6. Copy and make a note of the Application (client) ID. This value is required to create user credentials for the PostgreSQL gateway.

  7. Under Manage, select Certificates & secrets > New client secret.

  8. Enter a client secret description and an expiry time, and then select Add.

  9. Copy and make a note of the client secret in the Value field.

    NOTE

    Make sure you copy this value now. This value will be hidden after you leave this page.

Step 2: Create a group in Microsoft Entra ID and add the registered app as its member

  1. Open the overview window in Microsoft Entra ID and select Manage > Groups.

  2. Create a group. See the Microsoft Entra ID documentation for details.

  3. Open the group. Under Manage, select Members > Add members.

  4. Find the app you created above and click Select.

  5. Return to the overview, and then copy and make a note of the Object ID.

Step 3: Create a group in CDF and link it to the Microsoft Entra ID group

  1. Sign in to CDF as an admin and navigate to Admin > Groups > Create new group.

  2. In the Create a new group window, enter a unique name for the group.

  3. Add relevant capabilities. The minimum required capabilities are:

    • postgresGateway:READ
    • postgresGateway:WRITE
    • project:list
    • groups:list
    • session:CREATE

    The created user will also need read and/or write capabilities for the CDF resources you want to ingest data into. For instance, if you're ingesting assets, add assets:read and assets:write.

    INFO

    If you revoke the capabilities in the CDF group, you also revoke access for the PostgreSQL gateway.

  4. Link the CDF group to a Microsoft Entra ID (ME-ID) group:

    1. In the Source ID field, enter the Object ID of the ME-ID group exactly as it appears in ME-ID.

    2. In the Source name field, enter the name of the group in Microsoft Entra ID.

  5. Select Create. For a scripted alternative to these steps, see the sketch below.
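
If you prefer to script this step instead of using the CDF UI, the sketch below creates an equivalent group through the Cognite Groups API. It's a minimal sketch in Python with the requests library, assuming placeholder values throughout; in particular, the postgresGatewayAcl name is an assumption, so verify the exact ACL names against the Cognite API reference.

import requests

CLUSTER = "<cluster>"    # your CDF cluster; contact Cognite support if unsure
PROJECT = "<project>"    # your CDF project name
TOKEN = "<admin bearer token from your identity provider>"

group = {
    "name": "postgres-gateway-users",              # any unique group name
    "sourceId": "<Object ID of the ME-ID group>",  # links the CDF group to ME-ID
    "capabilities": [
        # Assumed ACL name for the gateway capability -- check the API reference.
        {"postgresGatewayAcl": {"actions": ["READ", "WRITE"], "scope": {"all": {}}}},
        {"projectsAcl": {"actions": ["LIST"], "scope": {"all": {}}}},
        {"groupsAcl": {"actions": ["LIST"], "scope": {"all": {}}}},
        {"sessionsAcl": {"actions": ["CREATE"], "scope": {"all": {}}}},
        # Add read/write ACLs for the resources you ingest, e.g. assets:
        {"assetsAcl": {"actions": ["READ", "WRITE"], "scope": {"all": {}}}},
    ],
}

# Create the group in CDF, linked to the ME-ID group via sourceId.
resp = requests.post(
    f"https://{CLUSTER}.cognitedata.com/api/v1/projects/{PROJECT}/groups",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"items": [group]},
)
resp.raise_for_status()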

Step 4: Create user credentials for the PostgreSQL gateway

  1. Send a POST request to the https://{cluster}.cognitedata.com/api/v1/projects/{project}/postgresgateway endpoint.
TIP

The API is currently in beta, and the request header must include cdf-version: beta.

Where:

  • cluster is the cluster where your CDF instance is installed. If you don't know the cluster name, contact Cognite support.

  • project is the name of your CDF project.

  2. Get a Bearer token from your identity provider for authentication, for example using Postman.

  3. Get a session nonce from the Cognite API sessions endpoint, using the Bearer token from the previous step.

CAUTION

The nonce expires after 1 minute and must be used immediately.

  4. Provide the nonce to the API endpoint, as shown in the end-to-end sketch after this list:

{
  "items": [
    {
      "credentials": {
        "nonce": "<SESSION NONCE>"
      }
    }
  ]
}
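
The sketch below strings the whole of Step 4 together in Python with the requests library: it fetches a Bearer token through the standard Microsoft Entra ID client credentials flow, creates a session nonce, and calls the gateway endpoint with the beta header. The placeholder values are assumptions; check the request and response shapes against the Cognite API reference.

import requests

TENANT_ID = "<Microsoft Entra ID tenant ID>"
CLIENT_ID = "<Application (client) ID from Step 1>"
CLIENT_SECRET = "<client secret from Step 1>"
CLUSTER = "<cluster>"
PROJECT = "<project>"
BASE = f"https://{CLUSTER}.cognitedata.com/api/v1/projects/{PROJECT}"

# 1. Get a Bearer token from Microsoft Entra ID (client credentials flow).
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": f"https://{CLUSTER}.cognitedata.com/.default",
    },
)
token = token_resp.json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Create a session to get a nonce. The nonce expires after 1 minute,
#    so use it immediately.
session_resp = requests.post(
    f"{BASE}/sessions",
    headers=headers,
    json={"items": [{"clientId": CLIENT_ID, "clientSecret": CLIENT_SECRET}]},
)
nonce = session_resp.json()["items"][0]["nonce"]

# 3. Create the PostgreSQL gateway user. The API is in beta, so the
#    cdf-version: beta header is required.
user_resp = requests.post(
    f"{BASE}/postgresgateway",
    headers={**headers, "cdf-version": "beta"},
    json={"items": [{"credentials": {"nonce": nonce}}]},
)
print(user_resp.json())  # the response includes the generated username and password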

The service returns, among other things, a username and password with the same CDF capabilities as the group you created in CDF. Keep these safe; they're required when you set up the gateway connection in your ETL tool (see the connection sketch below).

CAUTION

Make sure you copy the username and password. If you lose the password, it can't be recovered.
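
From your ETL tool's point of view, the gateway now behaves like an ordinary PostgreSQL server. Below is a minimal connection sketch using Python and psycopg2; the host and database names are placeholders, so use the connection details provided for your cluster.

import psycopg2

conn = psycopg2.connect(
    host="<PostgreSQL gateway host for your cluster>",  # placeholder
    port=5432,
    user="<username returned in Step 4>",
    password="<password returned in Step 4>",
    dbname="<database name for your gateway user>",     # placeholder
)
with conn, conn.cursor() as cur:
    # CDF resource types appear as ordinary tables, e.g. assets and events.
    cur.execute(
        "INSERT INTO events (external_id, type, description) VALUES (%s, %s, %s)",
        ("my-event-1", "maintenance", "Example event ingested through the gateway"),
    )
conn.close()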

Migrate to PostgreSQL gateway 2.0

PostgreSQL Gateway 2.0 replaces the previous version (1.0).

As indicated in Step 4 above, this version uses sessions on behalf of your project and OIDC authentication to make requests to CDF.

Most tables remain the same, but some have added or renamed columns, and some tables have been restructured or split for clarity.

These modifications are highlighted below:

Assets

Table name: assets

Added

Column | Type
created_time | TIMESTAMPTZ
last_updated_time | TIMESTAMPTZ

Events

Table name: events

Added

Column | Type
created_time | TIMESTAMPTZ
last_updated_time | TIMESTAMPTZ

Extraction pipeline runs

Table name: extraction_pipeline_runs

Added

Column | Type
id | BIGINT

Extraction pipelines

Table name: extraction_pipelines

Added

Column | Type
contacts | JSON
raw_tables | JSON
documentation | VARCHAR(10000)
last_updated_time | TIMESTAMPTZ
created_by | VARCHAR(255)

Files

Table name: files

Added

Column | Type
directory | VARCHAR(512)
asset_ids | BIGINT[]
source_created_time | TIMESTAMPTZ
source_modified_time | TIMESTAMPTZ
security_categories | BIGINT[]
created_time | TIMESTAMPTZ
last_updated_time | TIMESTAMPTZ

Labels

Table name: labels

Added

Column | Type
data_set_id | BIGINT[]
created_time | TIMESTAMPTZ

Sequences

Table name: sequences

Added

Column | Type
created_time | TIMESTAMPTZ

Sequence rows

Table name: sequence_rows

Changed

Old column | New column | Old type | New type
row_values | values | JSON | JSON

Datapoints

The datapoints table has been split into two tables, string_datapoints and double_datapoints, because version 1.0 only supported double datapoints.

Both tables include the following columns (a short ingestion sketch follows the table):

Column | Type
value | VARCHAR(255) in string_datapoints; DOUBLE PRECISION in double_datapoints
timestamp | TIMESTAMPTZ
external_id | VARCHAR(255)
id | BIGINT
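
In practice, the split only changes which table an insert targets. A minimal sketch with placeholder connection details: numeric values go to double_datapoints, string values to string_datapoints.

import datetime

import psycopg2

conn = psycopg2.connect(host="<gateway host>", port=5432, user="<username>",
                        password="<password>", dbname="<database>")
with conn, conn.cursor() as cur:
    now = datetime.datetime.now(datetime.timezone.utc)
    # Numeric datapoints: value is DOUBLE PRECISION.
    cur.execute(
        "INSERT INTO double_datapoints (external_id, timestamp, value) VALUES (%s, %s, %s)",
        ("my-numeric-ts", now, 42.5),
    )
    # String datapoints: value is VARCHAR(255).
    cur.execute(
        "INSERT INTO string_datapoints (external_id, timestamp, value) VALUES (%s, %s, %s)",
        ("my-string-ts", now, "OPEN"),
    )
conn.close()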

Timeseries

The timeseries table has been renamed to time_series.

Removed

The legacy_name column has been removed and is no longer available.

Raw

The raw table has been renamed to raw_rows to clearly distinguish raw rows from raw tables. A short insert sketch follows the Changed table below.

raw_rows

Added

Column | Type
_raw_last_updated_time | TIMESTAMPTZ

Changed

Old column | New column | Old type | New type
database | _raw_database | VARCHAR | VARCHAR(32)
table | _raw_table | VARCHAR | VARCHAR(64)
key | _raw_key | VARCHAR(1024) | TEXT
columns | _raw_columns | JSON | JSON
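
Concretely, an insert that previously targeted raw (with database, table, key, and columns) now targets raw_rows with the _raw_-prefixed column names. A minimal sketch with placeholder connection details:

import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect(host="<gateway host>", port=5432, user="<username>",
                        password="<password>", dbname="<database>")
with conn, conn.cursor() as cur:
    # Same ingestion as before, with the 2.0 table and column names.
    cur.execute(
        "INSERT INTO raw_rows (_raw_database, _raw_table, _raw_key, _raw_columns) "
        "VALUES (%s, %s, %s, %s)",
        ("my_db", "my_table", "row-1", Json({"sensor": "A1", "value": 17.0})),
    )
conn.close()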

raw_tables

Added

Column | Type
_raw_database | VARCHAR(32)
_raw_table | VARCHAR(64)