Abstract

Past research in computer vision has shown that image interpretation is a highly underconstrained task. Fusing information from multiple cues within a single image, or from multiple views acquired with the same modality, has met with only marginal success. Recently, the fusion of information from different sensing modalities has been studied as a way to further constrain the interpretation. This paper presents an overview of approaches developed for image segmentation and analysis using multi-sensor fusion. We present examples of three systems using different modalities: a system for image segmentation and interpretation using ladar (laser radar) and thermal images, a system using registered thermal and visual images for surface heat flux analysis, and an image synthesis system that generates visual and thermal images based on the internal heat flow in objects.