Abstract

Semantic scene understanding using thermal images has received great attention because thermal cameras can see in challenging illumination conditions. However, thermal images lack color information and their edges are often blurred, making them poorly suited for direct use by existing semantic segmentation networks designed for RGB images. To address this problem, we propose a cross-modal edge-privileged knowledge distillation framework, which uses a well-trained RGB-Thermal fusion-based semantic segmentation network with edge-privileged information as the teacher to guide the training of a student semantic segmentation network that uses only thermal images. Experimental results on a public dataset demonstrate that, under the guidance of the teacher, the student network outperforms state-of-the-art methods using only thermal images. Our code is available at https://github.com/lab-sun/CEKD.
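To make the teacher-student setup concrete, the sketch below shows a generic response-based distillation step of the kind the abstract describes: a frozen RGB-Thermal fusion teacher produces per-pixel soft labels, and a thermal-only student is trained with a supervised loss plus a KL term toward the teacher's predictions. This is a minimal illustration, not the released CEKD implementation; the function name `distillation_step`, the `teacher`/`student` modules, and the `temperature` and `alpha` weights are hypothetical, and the edge-privileged component of the actual framework is not shown.

```python
# Hypothetical sketch of cross-modal knowledge distillation (not the authors' code).
# Teacher: frozen RGB-Thermal fusion segmentation network.
# Student: thermal-only segmentation network being trained.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, rgb, thermal, labels,
                      temperature=2.0, alpha=0.5):
    """One training step: supervised cross-entropy plus KL to teacher logits."""
    with torch.no_grad():
        teacher_logits = teacher(rgb, thermal)   # [B, C, H, W], teacher sees both modalities
    student_logits = student(thermal)            # [B, C, H, W], student sees thermal only

    # Standard supervised segmentation loss on ground-truth labels.
    ce_loss = F.cross_entropy(student_logits, labels, ignore_index=255)

    # Soft-label distillation: match per-pixel class distributions of the teacher.
    t = temperature
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)

    return alpha * ce_loss + (1.0 - alpha) * kd_loss
```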
