Abstract
In recent decades, methods for finding invariant points in digital images, known as fiducial points, have attracted great attention, mainly driven by the demands of numerous computer vision and image processing applications, such as the geometric matching of global structures, objects, or specific regions. Among the best-known approaches are algorithms such as SIFT, HOG, and SURF, together with their variations, such as A-SIFT and PCA-SIFT, among many others. Although the number of studies demonstrating the efficiency of such methods is undoubtedly large, the vast majority compare their performance only on pairs of images with small changes in viewing perspective, objects, or specific regions of the scenes. Consequently, the behavior of this type of technique under large viewpoint changes, here called LVC, has received little attention from researchers. Moreover, for filtering points of interest, most techniques have relied on traditional extensive statistics. However, image processing methods based on a different class of statistics, known as non-extensive statistics, have proven efficient in several applications. In this paper, we present a new method, called q-SIFT, based on non-extensive Tsallis statistics, to find fiducial points in sequences of video frames under large viewpoint changes. We experimentally demonstrate the efficiency of the proposed method on video databases and propose new evaluation metrics for this type of algorithm.
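The abstract does not detail how q-SIFT filters keypoints; as a point of reference only, the sketch below shows the underlying statistic, the Tsallis non-extensive entropy S_q = (1 - Σ p_i^q)/(q - 1) of an image's intensity histogram, which recovers the ordinary Shannon (extensive) entropy as q → 1. The function name, the choice q = 1.5, and the 8-bit grayscale input are illustrative assumptions, not the paper's method.

```python
import numpy as np

def tsallis_entropy(image, q=1.5, bins=256):
    """Tsallis non-extensive entropy S_q of a grayscale image's
    intensity histogram.

    NOTE: illustrative sketch only; the actual q-SIFT
    keypoint-filtering rule is not specified by the abstract.
    """
    # Empirical intensity distribution over the histogram bins.
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                     # empty bins contribute nothing
    if abs(q - 1.0) < 1e-12:         # q -> 1: Shannon entropy limit
        return float(-(p * np.log(p)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))
```

Varying q away from 1 re-weights rare versus common intensities, which is what makes the non-extensive statistic a tunable alternative to the traditional extensive one for selecting points of interest.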