Abstract

Object matching across the non-overlapping views of multiple cameras is a challenging task due to many factors, e.g. complex backgrounds, illumination variation, the pose of the observed object, differences in viewpoint and image resolution between cameras, shadows, and occlusions. Matching observations of an object whose appearance varies in such a context usually reduces to evaluating their similarity over carefully chosen image features. We observe that each feature tends to be robust to a particular kind of variation, e.g. SIFT is robust to changes in viewpoint and scale, and we argue that combining the strengths of a bag of such features yields better performance. Based on these observations, we propose an adaptive feature-fusion algorithm. The algorithm first evaluates the matching accuracy of four carefully chosen and well-validated features, color histogram, UV chromaticity, major color spectrum, and SIFT, using exponential models of entropy as the similarity measure. Second, an adaptive fusion algorithm fuses this bag of features into a collaborative similarity measure. Our approach adaptively and dynamically reduces the variation in object appearance caused by multiple factors. Experimental results on human matching show that our approach achieves higher robustness and matching accuracy than previous fusion methods.
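The abstract does not spell out the fusion rule, so the sketch below is only a rough illustration of accuracy-weighted fusion of per-feature similarity scores: weights proportional to each feature's estimated matching accuracy combine the four cues named above into a single score. All scores, accuracy values, and the weighting rule itself are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

# Hypothetical per-feature similarity scores between two observations of an
# object, each in [0, 1]. The feature set mirrors the four cues named in the
# abstract; the values themselves are placeholders.
feature_scores = {
    "color_histogram":      0.72,
    "uv_chromaticity":      0.65,
    "major_color_spectrum": 0.58,
    "sift":                 0.81,
}

# Hypothetical reliability estimates, e.g. each feature's recent matching
# accuracy on validation pairs (an assumption, not the paper's weighting).
feature_accuracy = {
    "color_histogram":      0.70,
    "uv_chromaticity":      0.60,
    "major_color_spectrum": 0.55,
    "sift":                 0.85,
}

def fuse_similarities(scores, accuracy):
    """Fuse per-feature similarities with accuracy-proportional weights."""
    names = list(scores)
    w = np.array([accuracy[n] for n in names], dtype=float)
    w /= w.sum()                          # normalize weights to sum to 1
    s = np.array([scores[n] for n in names], dtype=float)
    return float(np.dot(w, s))            # collaborative similarity score

if __name__ == "__main__":
    fused = fuse_similarities(feature_scores, feature_accuracy)
    print(f"fused similarity: {fused:.3f}")  # higher => more likely same object
```

In such a scheme, a feature that has recently been unreliable (e.g. a color cue under strong illumination change) contributes less to the collaborative measure, which is the general intuition behind adaptive fusion.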
