Abstract

In recent decades, methods for finding invariant points in digital images, known as fiducial points, have attracted great attention, driven mainly by the demands of several computer vision and image processing applications, such as the geometric matching of global structures, objects, or specific regions. Among the best-known approaches are algorithms such as SIFT, HOG, and SURF, along with variants such as A-SIFT and PCA-SIFT, among many others. Although the number of studies demonstrating the efficiency of such methods is undoubtedly large, the vast majority compare their performance only on pairs of images with small changes in viewing perspective, objects, or specific regions of the scenes. As a result, the behavior of this type of technique under large viewpoint changes, referred to here as LVC, has received little attention from researchers. Moreover, to filter points of interest, most techniques have relied on traditional extensive statistics. However, image processing methods based on a different family of statistics, called non-extensive statistics, have proven efficient in several applications. In this paper, we present a new method, called q-SIFT, based on non-extensive Tsallis statistics, to find fiducial points in sequences of video frames under large viewpoint changes. We experimentally demonstrate the efficiency of the proposed method on video databases and propose new evaluation metrics for this type of algorithm.

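To make the statistical foundation concrete, the sketch below computes the Tsallis entropy of an image's intensity histogram, S_q = (1 - Σ_i p_i^q) / (q - 1), which recovers the Shannon (extensive) entropy in the limit q → 1. This is a minimal illustration of the non-extensive statistics the abstract refers to, not the paper's q-SIFT implementation; the function name, the 8-bit grayscale assumption, and the choice of q are illustrative assumptions.

```python
import numpy as np

def tsallis_entropy(image, q=0.8, bins=256):
    """Tsallis entropy of a grayscale image's intensity histogram.

    Illustrative sketch only: assumes an 8-bit grayscale image; the
    entropic index q is a free parameter, not a value from the paper.
    For q -> 1 this reduces to the Shannon (extensive) entropy.
    """
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()   # normalized histogram (probability estimate)
    p = p[p > 0]            # drop empty bins to avoid log(0) / 0**q terms
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))           # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)   # S_q = (1 - sum p_i^q) / (q - 1)

# Example usage on a synthetic frame:
frame = np.random.randint(0, 256, size=(480, 640))
print(tsallis_entropy(frame, q=0.8))
```

Values of q below 1 emphasize rare intensity levels, while values above 1 emphasize dominant ones; tuning q is what distinguishes such non-extensive measures from the standard extensive (Shannon) case.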