Abstract

Panoptic segmentation is a computer vision task that aims to identify and analyze all objects present in an image. While semantic segmentation focuses on labeling each pixel of an image with a category label, panoptic segmentation goes further by not only assigning semantic labels but also identifying and distinguishing individual instances of objects. This task is valuable for various applications, such as robotics, surveillance systems, or autonomous vehicle navigation. In this work, we propose a new informed deep learning approach that combines the strengths of deep neural networks for panoptic segmentation with additional knowledge about spatial relationships between objects. This is particularly important because spatial relationships can provide useful cues for resolving ambiguities, distinguishing between overlapping or similar object instances, and capturing the holistic structure of a scene. We propose a novel training methodology that integrates this knowledge directly into the optimization process of the deep neural network. Our approach includes a procedure for extracting and representing spatial relationship knowledge, which is incorporated into training through a specially designed loss function. The effectiveness of the proposed method is evaluated and validated on several challenging datasets, namely the CityScapes, KITTI, IDD, and COCO datasets.
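The abstract does not spell out the loss, but the general idea of folding spatial relation knowledge into the objective can be sketched as a weighted penalty added to the standard panoptic loss. The PyTorch sketch below is purely illustrative and not the paper's actual formulation: the function names, the use of softmax class maps, the toy "class a appears above class b" relation, the hinge-shaped penalty, and the weight lam are all our assumptions.

import torch

def expected_centroid_y(prob_map):
    # Expected row index (vertical centroid) of a soft class mask.
    # prob_map: (H, W) tensor of per-pixel probabilities for one class.
    h, w = prob_map.shape
    rows = torch.arange(h, dtype=prob_map.dtype, device=prob_map.device)
    row_mass = prob_map.sum(dim=1)                  # probability mass per row, (H,)
    total = row_mass.sum().clamp_min(1e-6)          # avoid division by zero
    return (rows * row_mass).sum() / total

def spatial_relation_loss(probs, above_pairs):
    # Penalize violations of known "class a appears above class b" relations.
    # probs:       (C, H, W) softmax output for one image.
    # above_pairs: list of (a, b) class-index pairs from the knowledge base.
    loss = probs.new_zeros(())
    for a, b in above_pairs:
        ya = expected_centroid_y(probs[a])
        yb = expected_centroid_y(probs[b])
        # Hinge: zero when a's centroid lies above b's (smaller row index),
        # linearly growing penalty when the prediction contradicts the prior.
        loss = loss + torch.relu(ya - yb)
    return loss

def total_loss(panoptic_loss, probs, above_pairs, lam=0.1):
    # Combined objective: standard panoptic loss plus the knowledge term.
    # lam balances data fit against consistency with the spatial prior.
    return panoptic_loss + lam * spatial_relation_loss(probs, above_pairs)

In this toy version, each known relation contributes zero loss whenever the predicted soft masks already respect it, so the knowledge term only steers the network when its predictions contradict the spatial prior.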
