Use these practices to keep transformations reliable, efficient, and aligned with your data modeling goals. The sections focus on orchestration, transformation scope, and data destinations.
Orchestrate with Data workflows instead of schedules
Prefer Data workflows over time-based schedules when you have dependencies, retries, or multiple transformations that must run in a specific order. Data workflows support triggers and give you execution history for monitoring and debugging. They distribute load across the project, run tasks only when their prerequisites succeed, and provide built-in retries. Workflows make dependencies explicit and eliminate the schedule buffers you would otherwise need between dependent jobs.

Keep transformations focused by resource type
Aim for one transformation per resource type to simplify schema management and keep memory usage predictable. For example, create separate transformations for assets, time series, and relationships.

Choose destinations deliberately
Evaluate data type, volume, latency, and consumer needs before choosing a target. General guidance:

- Data models when you need relationships, governance, and reuse across apps. Prefer the Cognite Core Data Model for standard resources and integration patterns.
- Time series, events, or sequences for high-volume, low-latency data that should be written directly to resource types.
- Sequences for large, spreadsheet-like datasets where consumers read specific columns or slices.
- Files for large payloads or formats that do not fit RAW row and column limits.
- Do not default to RAW as a final destination.
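The guidance above can be sketched as a small decision helper. The function name and flags below are hypothetical, meant only to make the decision order concrete; they are not part of any Cognite API:

```python
# Hypothetical helper mirroring the destination guidance above.
# The rules and names are illustrative, not an official Cognite API.

def suggest_destination(
    needs_relationships: bool = False,
    high_volume_low_latency: bool = False,
    tabular_column_access: bool = False,
    exceeds_raw_limits: bool = False,
) -> str:
    """Return a suggested CDF destination for the stated needs."""
    if needs_relationships:
        return "data model (prefer the Cognite Core Data Model)"
    if tabular_column_access:
        return "sequences"
    if exceeds_raw_limits:
        return "files"
    if high_volume_low_latency:
        return "time series, events, or sequences"
    return "review requirements; avoid defaulting to RAW"
```

The point of the sketch is the ordering: structural needs (relationships, governance) outrank volume and latency, and RAW is never the fallback answer.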
Process only new data
Use is_new() on the primary source and join in related data to enrich new entries, as recommended in the transformation SQL guidance.
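In plain terms, is_new() filters the primary source down to rows changed since the last successful run, and the join then enriches only those rows. A minimal stand-alone sketch of that incremental pattern (table names, fields, and the watermark handling are hypothetical; inside a transformation this is what is_new() does for you in SQL):

```python
# Stand-alone sketch of incremental processing with a watermark, mimicking
# what is_new() does inside a transformation. All names are illustrative.

def select_new_rows(rows, last_watermark):
    """Keep only rows updated since the previous successful run."""
    return [r for r in rows if r["lastUpdatedTime"] > last_watermark]

def enrich(new_rows, related_by_id):
    """Join related data onto the new rows only (left-join semantics)."""
    return [
        {**r, "description": related_by_id.get(r["id"], {}).get("description")}
        for r in new_rows
    ]

rows = [
    {"id": 1, "lastUpdatedTime": 100, "name": "pump-01"},
    {"id": 2, "lastUpdatedTime": 250, "name": "pump-02"},
]
related = {2: {"description": "Feed pump"}}

new_rows = select_new_rows(rows, last_watermark=200)  # only id 2 is new
enriched = enrich(new_rows, related)
# After a successful run, advance the watermark to max(lastUpdatedTime).
```

Filtering before the join keeps the enrichment work proportional to the changed rows, not the full source table, which is the main reason the guidance recommends is_new() on the primary source rather than on the joined-in data.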
For a concrete example of populating the Cognite Core Data Model with transformations, see Build an asset hierarchy.
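Whatever SQL produces the hierarchy, the rows it emits follow the same shape: each asset carries an externalId and a parentExternalId that must resolve to another row (or be empty for the root). A small sketch of that shape with a sanity check for dangling parent references (the row fields are illustrative, not a Cognite API):

```python
# Illustrative shape of asset-hierarchy rows (externalId/parentExternalId),
# with a check that every parent reference resolves. Not a Cognite API.

rows = [
    {"externalId": "site", "parentExternalId": None, "name": "Site"},
    {"externalId": "area-1", "parentExternalId": "site", "name": "Area 1"},
    {"externalId": "pump-01", "parentExternalId": "area-1", "name": "Pump 01"},
]

def dangling_parents(rows):
    """Return parent references that do not match any row's externalId."""
    ids = {r["externalId"] for r in rows}
    return [
        r["parentExternalId"]
        for r in rows
        if r["parentExternalId"] is not None and r["parentExternalId"] not in ids
    ]
```

Running a check like this on the transformation's output before writing catches broken hierarchies early, when the fix is a source-data or join problem rather than a cleanup job in the destination.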