The way you contextualize 360° images depends on your project type: data modeling projects, or legacy and hybrid projects.
In data modeling projects, use the 3D jobs API to get suggestions and then create contextualization approvals:
1. Get text detection suggestions
Run the 3D job through the 3D jobs API. Use the Create360ContextualizationJobRequest body: set the job type to ImageTextDetection and source.type to Cognite360ImageCollection.
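As a minimal sketch, the request body might be built like this. Only the job type (ImageTextDetection) and source.type (Cognite360ImageCollection) values come from the description above; the remaining source fields (externalId, space) are hypothetical placeholders, so check the 3D jobs API reference for the exact schema.

```python
import json

# Illustrative Create360ContextualizationJobRequest body.
# "type" and "source.type" follow the step above; "externalId" and
# "space" are hypothetical placeholders for your image collection.
job_request = {
    "type": "ImageTextDetection",
    "source": {
        "type": "Cognite360ImageCollection",
        "externalId": "my-360-image-collection",  # hypothetical identifier
        "space": "my-space",  # hypothetical data model space
    },
}

print(json.dumps(job_request, indent=2))
```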
2. Create contextualization approvals
To connect 360° image annotations (text and polygons) to CogniteAsset instances, use the Contextualization API (beta).
For asset-centric (legacy) or hybrid projects, you can contextualize 360° images in two ways:
Use your own tooling to read 360° images, run text detection, and create contextualization approvals.
Use the Vision API to get text regions from your 360° images and then create annotations with the Annotations API.
The Vision API is deprecated. See Deprecated features in CDF for details. We recommend migrating to a data modeling project.
The following steps use the Vision API for text detection and the Annotations API to create approvals.
1. Run text detection
Use the Vision API text detection to get text regions from your 360° images. Trigger text detection with the Cognite Python SDK:
file_ids: List of CDF file IDs for your 360° images.
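A sketch of the trigger and result handling, assuming the cognite-sdk package is installed and a CogniteClient can authenticate from your environment. The per-item result shape in the second helper is illustrative; verify the field names against the Vision API response schema.

```python
def run_text_detection(file_ids):
    """Trigger Vision text detection on 360° image files.

    Sketch only: assumes cognite-sdk is installed and credentials
    are configured for CogniteClient.
    """
    from cognite.client import CogniteClient
    from cognite.client.data_classes.contextualization import VisionFeature

    client = CogniteClient()
    job = client.vision.extract(
        features=VisionFeature.TEXT_DETECTION,
        file_ids=file_ids,  # CDF file IDs for your 360° images
    )
    job.wait_for_completion()
    return job


def collect_text_predictions(items):
    """Flatten per-file text predictions into (file_id, text) pairs.

    The item shape used here is illustrative -- check the Vision API
    reference for the exact response fields.
    """
    pairs = []
    for item in items:
        predictions = item.get("predictions", {}).get("textPredictions", [])
        for prediction in predictions:
            pairs.append((item["fileId"], prediction["text"]))
    return pairs
```

With an illustrative result item such as `{"fileId": 1, "predictions": {"textPredictions": [{"text": "21-PT-1019"}]}}`, `collect_text_predictions` returns `[(1, "21-PT-1019")]`, which feeds directly into the matching step below.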
2. Match text to CogniteAssets
To link detections to core data model CogniteAssets, match the detected text to the right asset.
Exact matching
Retrieve the CogniteAssets you're interested in, then match the detected text to asset properties such as name or externalId using an exact string match.
Fuzzy matching
Use fuzzy matching (for example, sequence similarity with Python's difflib library) so that text with optical character recognition (OCR) errors still matches the correct asset. Fuzzy matching helps when the detected text has typos, stray spaces, or misread characters.
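The two matching strategies can be sketched as follows. The asset records and detected strings are illustrative; in practice you would retrieve the CogniteAssets from CDF first.

```python
import difflib

# Illustrative asset records as you might retrieve them from
# the core data model.
assets = [
    {"name": "21-PT-1019", "externalId": "asset-21-pt-1019"},
    {"name": "23-VG-9101", "externalId": "asset-23-vg-9101"},
]


def exact_match(detected_text, assets):
    """Return the asset whose name or externalId equals the detected text."""
    for asset in assets:
        if detected_text in (asset["name"], asset["externalId"]):
            return asset
    return None


def fuzzy_match(detected_text, assets, cutoff=0.8):
    """Return the asset whose name is most similar to the detected text,
    tolerating OCR errors such as misread characters."""
    names = [asset["name"] for asset in assets]
    close = difflib.get_close_matches(detected_text, names, n=1, cutoff=cutoff)
    if not close:
        return None
    return next(a for a in assets if a["name"] == close[0])
```

For example, `fuzzy_match("21-PT-I019", assets)` (where OCR misread the digit `1` as `I`) still resolves to the `21-PT-1019` asset, while an unrelated string below the similarity cutoff returns `None`.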
3. Create annotations
To contextualize 360° images with the matched text, create annotations with the Annotations API. Use the annotation type images.AssetLink (legacy) or images.InstanceLink (hybrid) in the request items → data. Each annotation links one 360° image (file) to one asset and can include the spatial region where the text was detected.

Create annotations URL: https://{cluster}.cognitedata.com/api/v1/projects/{project}/annotations

Example request body (hybrid — images.InstanceLink):
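A minimal sketch of such a request body, built as a Python dict. The top-level fields follow the Annotations API; the exact data schema for images.InstanceLink should be verified against the API reference, and all identifiers (file ID, space, externalId, user) are hypothetical.

```python
import json

# Illustrative payload linking one 360° image file to one CogniteAsset
# instance. The "data" fields (text, instanceRef, textRegion) are an
# assumed shape for images.InstanceLink -- check the API reference.
annotation_request = {
    "items": [
        {
            "annotatedResourceType": "file",
            "annotatedResourceId": 1234567890,  # CDF file ID of the 360° image
            "annotationType": "images.InstanceLink",
            "status": "suggested",
            "creatingApp": "my-contextualization-script",  # hypothetical
            "creatingAppVersion": "1.0.0",
            "creatingUser": "jane.doe@example.com",  # hypothetical
            "data": {
                "text": "21-PT-1019",  # matched text from step 2
                "instanceRef": {  # assumed reference to the CogniteAsset
                    "space": "my-space",
                    "externalId": "asset-21-pt-1019",
                },
                "textRegion": {"xMin": 0.1, "xMax": 0.2, "yMin": 0.4, "yMax": 0.45},
            },
        }
    ]
}

print(json.dumps(annotation_request, indent=2))
```

POST this body to the create annotations URL above; each item creates one file-to-asset link.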