Abstract

If we have been there before, viewing a single photograph is sufficient to understand where the picture was captured. This fact highlights the impact of appearance cues in localization tasks [1,2,3]. Historically, visual place recognition denotes a related task, studied intensively by researchers in the computer vision community across a broad spectrum of applications [4], including 3D reconstruction [5], map fusion [6], augmented reality [7], and structure-from-motion [8]. However, visual place recognition in robotics is somewhat different. Since knowledge of an environment is a prerequisite for complex robotic tasks, it is essential for the vast majority of localization implementations, as well as for re-localization and loop-closure detection pipelines within simultaneous localization and mapping (SLAM). Generally, it refers to the ability of a system to match a previously visited place using onboard computer vision tools. In robotics, the recognizer has to generalize as much as possible, in the sense that it should support robust associations among different recordings of the same place despite viewpoint and environmental variations, under run-time, storage, and processing-power restrictions. Moreover, the application requirements of the two domains also drive this differentiation. The robotics community focuses on obtaining highly confident estimates when predicting a revisited place, e.g., to perform visual loop-closure detection. At the same time, researchers in computer vision typically prefer to retrieve as many prospective matches of a query image as possible, e.g., for 3D-model reconstruction [9]. More specifically, the former has to identify only one reference candidate associated with the previously visited traversal under varied conditions, while the latter can retrieve more matches, corresponding to a broad collection of images.
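The two retrieval styles contrasted above can be sketched as nearest-neighbor search over global image descriptors. The following is a minimal illustration, not any specific system from the chapter: descriptors are stubbed with random vectors (in practice they would come from, e.g., a bag-of-visual-words or CNN encoder), and the confidence threshold is an assumed tuning parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 reference images represented by 128-D unit-norm global descriptors
# (placeholder values; a real pipeline would extract these from imagery).
database = rng.normal(size=(100, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)

# A noisy re-observation of reference image 42 acts as the query.
query = database[42] + 0.02 * rng.normal(size=128)
query /= np.linalg.norm(query)

# Cosine similarity of the query against every reference descriptor.
similarities = database @ query

# Robotics / loop-closure style: accept a single best candidate only if it
# clears a confidence threshold (0.9 here is an illustrative choice).
best = int(np.argmax(similarities))
if similarities[best] > 0.9:
    print("loop-closure candidate:", best)

# Computer-vision / reconstruction style: keep the top-k candidates.
k = 5
top_k = np.argsort(similarities)[::-1][:k]
print("retrieval candidates:", top_k.tolist())
```

The threshold-gated single match reflects the robotics requirement of high-confidence decisions, while the top-k list reflects the retrieval setting where recall across a broad image collection matters more.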
Furthermore, in robotics, visual place recognition involves sequential imagery, which significantly affects the recognizer's performance. Finally, visual place recognition in robotics, apart from providing a topological constraint that indicates the same place in the database, produces geometric information that can be used to correct the trajectory, as in the case of loop-closure detection. A map diagram of the topics discussed in this chapter is depicted in Fig. 1.1: light grey boxes indicate the topics described in brief, while darker boxes are the ones presented in detail.

Fig. 1.1: Diagram depicting the taxonomy of topics discussed in the chapter. Starting from the general concept of place recognition in computer vision and robotics, it describes how visual place recognition is used in SLAM pipelines to address the task of loop-closure detection. The darker colored topics are described in detail within this book, while the lighter ones are briefly reported.
