
Upload point cloud

Info

For large point clouds, we don't recommend using the Upload 3D Models feature in Cognite Data Fusion (CDF), because of long upload times and the risk of file transfer failures.

To upload point clouds with this approach, you don't need to compress the individual files into a single zip file. You only specify the folder or directory containing the files.

To create a new model:

  1. Go to CDF > Data management > Configure > 3D.

  2. Select New > Model and follow the wizard. In the wizard:

  • Select Create empty model to create a model without uploading an initial file.

  • Select Upload new model to upload a new file, or choose an existing file to process into a model.

  3. When the model is created, select Trigger the point cloud filtering and indexing.

Prerequisites

Install Python and the Cognite Python SDK in your environment before you get started. To download and install Python, see Download Python.

To install the Cognite Python SDK, use:

pip install cognite-sdk

If you have multiple Python versions installed, use the py launcher to specify the version:

py -<version> -m pip install cognite-sdk

Replace <version> with the Python version you want to use.
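For example, to install with Python 3.11 (the version number here is only an example):

py -3.11 -m pip install cognite-sdk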

Create and implement

Copy the code below into a local Python file. The script uploads all the files and generates a JSON file containing the IDs of the uploaded point cloud files. You can use this JSON file's file ID in the Point cloud filtering and indexing dialog. The script automatically opens a browser pop-up for authentication.
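The generated JSON file has the following shape; the file IDs shown here are placeholders for illustration:

{"dataType": "PointCloud", "fileIds": [1234567890123456, 2345678901234567]}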

Info

Uploading can take a significant amount of time. Don't interrupt the process or shut down the computer until it's completed.

Use the Python script below to implement the solution.

from cognite.client import global_config, ClientConfig, CogniteClient
from cognite.client.credentials import OAuthInteractive
import json

# Note: Input the details for your environment
# ******************************************************

TENANT_ID = '<TENANT_ID>'
CLIENT_ID = '<CLIENT_ID>'
CDF_CLUSTER = '<CDF_CLUSTER>'
COGNITE_PROJECT = '<PROJECT NAME>'
DIRECTORY_NAME = "<Specify the directory containing all point cloud files>"
AUTHORITY_URL = f"https://login.microsoftonline.com/{TENANT_ID}"

# ******************************************************

BASE_URL = f"https://{CDF_CLUSTER}.cognitedata.com"

# Authenticate interactively with OAuth against Microsoft Entra ID
tokenProvider = OAuthInteractive(
    authority_url=AUTHORITY_URL,
    client_id=CLIENT_ID,
    scopes=[f"{BASE_URL}/.default"],
)

clientConfig = ClientConfig(
    client_name="upload point cloud script",
    project=COGNITE_PROJECT,
    credentials=tokenProvider,
    base_url=BASE_URL,
    timeout=300,
)

global_config.default_client_config = clientConfig
client = CogniteClient()

# A directory with point cloud files will be uploaded as individual files
print(f"Uploading files from {DIRECTORY_NAME} .... (Uploading is in progress; it can take hours)")
directory_upload_result = client.files.upload(DIRECTORY_NAME, recursive=True)
print(f"Number of files uploaded: {len(directory_upload_result)}")

# Create and upload a JSON file listing the IDs of the uploaded files
file_ids = [file.id for file in directory_upload_result]
file_content = {"dataType": "PointCloud", "fileIds": file_ids}
file_name = "Result.json"
print(f"Uploading {file_name} ....")
with open(file_name, 'w') as outfile:
    json.dump(file_content, outfile)

file_upload_result = client.files.upload(file_name)

# Print the result
print("Summary:")
print(f"File id : {file_upload_result.id}")
print(f"File name : {file_name}")