
Troubleshooting interactive diagram parsing

This article contains troubleshooting tips to help you resolve errors or unexpected behavior related to interactive diagram parsing.

Known issues and solutions

Prerequisites

  1. Make sure your assets exist in CDF.

  2. Check for SHX text embedded in the diagram file. If present, set configuration.annotationExtract = True to override overlapping OCR text. For more information, see the API documentation.

  3. Ensure the asset names (or any other fields in use) match the tags in the file. If they don't match, do the following:

    • Create an alias for the asset name to better match the tags in the file.

    • Set the partialMatch and minTokens parameters (see the sketch after this list).
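
A minimal sketch of such a detection call, run from a Cognite Jupyter Notebook, is shown below. The entity names, asset ID, and file ID are placeholders; partial_match and min_tokens are the Python SDK counterparts of the partialMatch and minTokens request fields.

from cognite.client import CogniteClient
from cognite.client.config import FusionNotebookConfig

client = CogniteClient(FusionNotebookConfig())

# Each entity can list the asset name together with any aliases that better
# match the tag text in the file.
entities = [
    {"name": ["21-PT-1019", "21PT1019"], "assetId": 123456789},  # placeholder values
]

detect_job = client.diagrams.detect(
    entities=entities,
    search_field="name",   # the entity field to match against tags in the file
    partial_match=True,    # allow partial matches between tag text and entity names
    min_tokens=2,          # minimum number of tokens a match must be based on
    file_ids=[987654321],  # placeholder file id
)
print(detect_job.result)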

Tags are recognized incorrectly

  1. I is recognized as T. For example, PI/TI is recognized as PT/TT.

    Solution: PI/TI tags often don't exist in the asset hierarchy; if they do, PT/TT tags may be their closest relatives. By default, a misread I is corrected to T. Use substitutions in the configuration to turn this correction off.

  2. Wildcard tags are linked to multiple tags. Some tags have an "X" in the middle that represents any digit 0-9, so a single wildcard tag corresponds to multiple concrete tags.

    Solution: Add aliases to tags during diagram detection. For example, if 11-V-6x66 should link to both 11-V-6166 and 11-V-6666, add 11-V-6x66 as an alias to both assets. A helper for building such aliases is sketched after the example below.

    The entities field looks like this:

    {
      "entities": [
        {
          "name": ["11-V-6166", "11-V-6x66"]
        },
        {
          "name": ["11-V-6666", "11-V-6x66"]
        }
      ]
    }
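
    If many assets match the same wildcard pattern, you can build these aliased entities programmatically. The sketch below is only illustrative: it assumes a lowercase "x" in the pattern stands for any single digit, as in the example above, and that the concrete tag names come from your own asset list.

    import re

    def entities_with_wildcard_alias(asset_names: list[str], pattern: str) -> list[dict]:
        """Attach a wildcard pattern (e.g. 11-V-6x66) as an alias to every matching asset name."""
        # Turn the wildcard pattern into a regular expression where "x" matches any single digit.
        regex = re.compile(re.escape(pattern).replace("x", "[0-9]"))
        return [
            {"name": [name, pattern]}  # concrete tag first, wildcard alias second
            for name in asset_names
            if regex.fullmatch(name)
        ]

    # Produces the two entities shown above.
    entities_with_wildcard_alias(["11-V-6166", "11-V-6666", "11-V-7000"], "11-V-6x66")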

  3. Only a subset of the tag is recognized; thus, it gets linked to an incorrect asset.

    Solution: Follow the steps in the Tags are not recognized section below to resolve this issue.

  4. A large bounding box containing characters spaced far apart results in incorrect detection.

    Solution: Set connectionFlags = ["natural_reading_order", "no_text_inbetween"]. See the API documentation for more information.

  5. Short text strings are detected as tags, resulting in false positives.

    Solution: Set configuration.customizeFuzziness.minChars. See the API documentation for more information, and the configuration sketch after this list.
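
The connectionFlags and customizeFuzziness options referenced in items 4 and 5 are part of the configuration object in the request body of the Detect annotations in engineering diagrams endpoint. The fragment below is a minimal sketch of that object; the minChars value is a placeholder, and the full schema should be checked against the API documentation.

# Fragment of the "configuration" object in a diagram/detect request body.
# Verify the full schema and accepted values in the API documentation.
configuration = {
    "connectionFlags": ["natural_reading_order", "no_text_inbetween"],  # item 4
    "customizeFuzziness": {"minChars": 3},  # item 5: placeholder value
}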

Tags are not recognized

  1. Use the OCR endpoint to see the raw OCR detection results. For more information, see the OCR API endpoint documentation. The endpoint is currently not available through the SDK.

Use the Python code below in the Cognite Jupyter Notebook.


from typing import Any

from cognite.client import CogniteClient
from cognite.client.config import FusionNotebookConfig
from cognite.client.data_classes.contextualization import DiagramConvertResults, FileReference

client = CogniteClient(
    FusionNotebookConfig(api_subversion="20230101-beta")
)

file_id = ...  # put in your file id


def ocr(client: CogniteClient, file_id: int, start_page: int = 1, limit: int = 50) -> list[dict[str, Any]]:
    """Get OCR text from a file that has been through diagram/detect before.

    Args:
        client (CogniteClient): Authenticated client.
        file_id (int): File id.
        start_page (int): First page to get OCR from.
        limit (int): The maximum number of pages to get OCR from.

    Returns:
        list[dict[str, Any]]: List of OCR results per page.
    """
    response = client.diagrams._camel_post(
        "/ocr",
        json={"file_id": file_id, "start_page": start_page, "limit": limit},
    )
    items = response.json()["items"]
    assert isinstance(items, list)
    return items


def ocr_annotation_to_detect_annotation(ocr_annotation: dict[str, Any]) -> dict[str, Any]:
    """Convert a raw OCR annotation to the annotation format used by diagram/convert."""
    bounding_box = ocr_annotation["boundingBox"]
    vertices = [
        {"x": x, "y": y}
        for x in [bounding_box["xMin"], bounding_box["xMax"]]
        for y in [bounding_box["yMin"], bounding_box["yMax"]]
    ]
    return {"text": ocr_annotation["text"], "region": {"shape": "rectangle", "page": 1, "vertices": vertices}}


def create_ocr_svg(client: CogniteClient, file_id: int) -> str:
    """Get OCR text for a single-page PDF and create an SVG that overlays it as rectangles on top of a raster image.

    Args:
        client (CogniteClient): Authenticated client.
        file_id (int): The file ID of the file used to create an OCR SVG.

    Returns:
        svg_link: URL of the generated SVG.
    """
    # Run detect to verify that the file has one page, and also make sure OCR results exist.
    detect_job = client.diagrams.detect(
        [{"name": "dummy"}], file_references=FileReference(file_id=file_id, first_page=1, last_page=1)
    )
    detect_result = detect_job.result

    file_result = detect_result["items"][0]
    if file_result["pageCount"] != 1:
        raise Exception("The file must have one page")

    ocr_result = ocr(client, file_id, 1, 1)[0]["annotations"]

    input_items = [
        {
            "fileId": file_id,
            "annotations": [ocr_annotation_to_detect_annotation(a) for a in ocr_result][
                :10000
            ],  # For now, a limit of the API
        }
    ]

    job = client.diagrams._run_job(
        job_path="/convert",
        status_path="/convert/",
        items=input_items,
        job_cls=DiagramConvertResults,
    )

    res = job.result

    return res["items"][0]["results"][0]["svgUrl"]


# Create an SVG file with the OCR overlay
create_ocr_svg(client, file_id)

Solution: If the tags are not recognized, characters in the text are misread, or there are unexpected leading or trailing characters, allow character substitutions in the Detect annotations in engineering diagrams endpoint > Request > configuration. For example, if there are unexpected leading or trailing characters, replace them with an empty string "". An illustrative fragment is shown below.
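
As an illustration only, such a substitution could be expressed along the lines of the fragment below. The field names used here (a substitutions list with from/to pairs) are hypothetical; the actual schema is defined in the API documentation for the Detect annotations in engineering diagrams endpoint.

# Hypothetical shape of a character-substitution rule that strips an
# unexpected leading character by replacing it with an empty string "".
# Check the API reference for the actual field names before using this.
configuration = {
    "substitutions": [
        {"from": "*", "to": ""},  # hypothetical field names and values
    ],
}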