Abstract

The number of different modalities for remote sensors continues to grow, bringing with it an increase in the volume and complexity of the data being collected. Although these datasets individually provide valuable information, in aggregate they offer additional opportunities to discover meaningful patterns on a large scale. However, the ability to combine and analyze disparate datasets is challenged by the potentially vast parameter space that results from aggregation. Each dataset on its own requires instrument-specific and dataset-specific knowledge, and using multiple, diverse datasets requires an understanding of how to translate and combine these parameters efficiently and effectively. While there are established techniques for combining datasets from specific domains or platforms, there is no generic, automated method that addresses the problem in general. Here, we discuss the application of deep learning to track objects across different image-like data modalities, given data in a similar spatio-temporal range, and to automatically co-register these images. Using deep belief networks combined with unsupervised learning methods, we are able to recognize and separate different objects within image-like data in a structured manner, thus making progress toward the ultimate goal of a generic tracking and fusion pipeline requiring minimal human intervention.
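
As a rough illustration of the object-separation step described above, the following Python sketch stacks two Bernoulli RBMs (a common stand-in for greedy, layer-wise deep-belief-network pre-training) and groups the learned features with k-means. The patch data, layer sizes, and number of clusters are illustrative assumptions, not details taken from the paper.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import BernoulliRBM

# Hypothetical stand-in for real sensor data: 1000 flattened 16x16 image
# patches with values scaled to [0, 1], as BernoulliRBM expects.
rng = np.random.default_rng(0)
patches = rng.random((1000, 256))

# Greedy layer-wise training of two RBM layers, a common surrogate for
# deep-belief-network pre-training.
rbm1 = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)
hidden1 = rbm1.fit_transform(patches)
rbm2 = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
hidden2 = rbm2.fit_transform(hidden1)

# Unsupervised grouping of the learned features: each cluster is a candidate
# object class; the number of clusters here is an arbitrary choice.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(hidden2)
print(np.bincount(labels))

In a full pipeline of the kind the abstract envisions, such cluster labels would seed per-object tracks that are then matched across modalities to drive co-registration.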
