Abstract

Augmented Reality (AR), for all practical purposes, requires extensive computation, accurate view alignment, and real-time performance. To address some of these limitations, an improved feature-detection method based on Maximally Stable Extremal Regions (MSER) is proposed. The approach extracts regions of interest using a true flood-fill strategy for building and maintaining the component tree, which gives true worst-case linear complexity (Linear-MSER). In the present work, Linear-MSER is applied at multiple scales of an image to increase the affine-invariance properties of the detector (MSLinear-MSER). The two detectors, Linear-MSER and MSLinear-MSER, are then combined separately with the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) descriptors for performance comparison. Performance is evaluated under varying imaging conditions, including changes in viewpoint, scale, blur, illumination, and JPEG compression. Results show that MSLinear-MSER+SIFT performs best in terms of time complexity and number of keypoint matches when executed at six octaves and five levels. This holds for all image sets considered, each containing images that are affine-transformed in one way or another. To demonstrate the efficiency of MSLinear-MSER+SIFT, a prototype AR system built on this approach is also developed and discussed in this article.
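The two ingredients named in the abstract — flood-fill extraction of extremal regions and running the detector over multiple scales of the image — can be illustrated with a deliberately simplified sketch. This is not the paper's linear-time component-tree algorithm: it flood-fills dark components at a single fixed threshold and downsamples with a 2x2 box filter to form pyramid levels, and all function names here are hypothetical.

```python
from collections import deque

def flood_fill_components(img, thresh):
    """Return the areas of connected components of pixels with
    intensity <= thresh (4-connectivity), found by an explicit-stack
    flood fill. `img` is a list of lists of ints (a grayscale image)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or img[y][x] > thresh:
                continue
            # Grow a new component from this unvisited dark seed pixel.
            area, stack = 0, deque([(y, x)])
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                area += 1
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx] and img[ny][nx] <= thresh):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            areas.append(area)
    return areas

def halve(img):
    """2x2 box-filter downsample: one pyramid level for the
    multi-scale (MSLinear-MSER-style) pass."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1]
              + img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) // 4
             for x in range(w)]
            for y in range(h)]

# Usage: a 4x4 dark square on a bright 8x8 background is found as one
# region at the base scale (area 16) and again one level up (area 4).
base = [[10 if 2 <= y <= 5 and 2 <= x <= 5 else 200 for x in range(8)]
        for y in range(8)]
print(flood_fill_components(base, 50))         # one component, area 16
print(flood_fill_components(halve(base), 50))  # one component, area 4
```

In the full algorithm the flood fill is driven by a priority queue over gray levels so every pixel is visited once (hence the worst-case linear bound), and region stability is judged by how slowly a component's area changes across thresholds; the sketch above only shows the connected-component and pyramid machinery around that idea.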

