Abstract

Historical images are a valuable source of information for many applications, such as the monitoring of cities and territories and the reconstruction of destroyed buildings, and they are increasingly being shared through virtual- and augmented-reality applications for cultural promotion projects. Finding reliable and accurate matches between historical and present-day images is a fundamental step for such tasks, since they require co-registering the present 3D scene with the past one. Classical image matching solutions are sensitive to strong radiometric variations within the images, which are particularly relevant in these multi-temporal contexts because of the different imaging media (film/sensors) used for the acquisitions, different lighting conditions, and different viewpoint angles. In this work, we investigate the actual improvement provided by recent deep learning approaches in matching historical and present-day images. Since learning-based methods have been trained to find reliable matches in challenging scenarios, including large viewpoint and illumination changes, they could overcome the limitations of classic hand-crafted methods such as SIFT and ORB. The most relevant approaches proposed by the research community in recent years are analyzed and compared on pairs of multi-temporal images.
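
To make the classical baseline concrete, the following is a minimal sketch (not the paper's own pipeline) of hand-crafted feature matching with SIFT using OpenCV; the image file names are placeholders, and the 0.75 ratio-test threshold is a common default rather than a value taken from this work.

```python
import cv2

# Load a historical and a present-day image of the same scene (placeholder paths).
img_old = cv2.imread("historical.jpg", cv2.IMREAD_GRAYSCALE)
img_new = cv2.imread("present.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute hand-crafted SIFT descriptors in both images.
sift = cv2.SIFT_create()
kp_old, desc_old = sift.detectAndCompute(img_old, None)
kp_new, desc_new = sift.detectAndCompute(img_new, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous correspondences,
# which tend to be numerous when radiometry differs strongly between epochs.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(desc_old, desc_new, k=2)
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

print(f"{len(good)} putative matches between the historical and present-day images")
```

Learning-based matchers replace the detector/descriptor (and sometimes the matching step itself) with networks trained on challenging viewpoint and illumination changes, which is the comparison this work carries out on multi-temporal image pairs.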
