Abstract

To date, various approaches have been proposed to create Augmented Reality (AR) environments in which virtual and real content are integrated. One of these is the vision-based approach, which divides into two branches: Marker-Based Augmented Reality (MBAR) and Markerless Augmented Reality (MAR). In MBAR, a reference image is registered with the system in advance, and when that image enters the camera view, an AR environment is created. In MAR, by contrast, no image is registered in advance; instead, natural characteristics present in the scene, such as edges, corners, and geometrical shapes, are used to create the AR environment. MAR, however, relies on algorithms that demand high processing power and memory capacity. Within the scope of this study, the MAR model was taken as the reference, and an evaluation of combinations of descriptor extractors (such as ORB, SIFT, and SURF) and matchers (such as BruteForce, BruteForce-Hamming, and FLANN-based) is presented. The aim was to characterize i) the number of keypoints and the detection time obtained with different descriptor extractors, and ii) the number of matching keypoints and the positional deviation of a virtual object placed on a real-world scene obtained with different matchers. To this end, analyses were performed at different image scales and brightness levels on both PC and mobile platforms. Results showed that, on both platforms and under all conditions, combinations using ORB ran faster and with less deviation than combinations using the other methods. In addition, the RANSAC algorithm was applied to reduce the total mean deviation ratio, lowering it from 70% to 4.5%.
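The fastest pairing reported above, ORB with BruteForce-Hamming, works because ORB produces binary descriptors that can be compared with the Hamming distance (number of differing bits). The sketch below illustrates that matching step in plain Python on hypothetical byte-string descriptors; it is a minimal illustration, not the paper's implementation, which in practice would use OpenCV's `BFMatcher(cv2.NORM_HAMMING)` followed by `cv2.findHomography(..., cv2.RANSAC)` for the deviation-reduction step.

```python
def hamming(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length binary descriptors."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def bf_match(query, train, max_dist=64):
    """Brute-force matching: for each query descriptor, find the nearest
    train descriptor by Hamming distance, keeping only matches whose
    distance is at most max_dist (a hypothetical threshold)."""
    matches = []
    for qi, q in enumerate(query):
        best = min(range(len(train)), key=lambda ti: hamming(q, train[ti]))
        if hamming(q, train[best]) <= max_dist:
            matches.append((qi, best))  # (query index, train index)
    return matches
```

For example, with two-byte toy descriptors, `bf_match([b"\x00\x01", b"\xfe\xff"], [b"\x00\x00", b"\xff\xff"], max_dist=2)` pairs each query with the train descriptor one bit away from it. Real ORB descriptors are 32 bytes (256 bits), so the same logic applies at a larger width.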
