Abstract
Person re‐identification (Re‐ID) is one of the most remarkable research topics and is widely applied in our daily lives. For person Re‐ID in bird's‐eye scenes, traditional computer vision‐based methods used multiple features, for example, texture and color, of a pedestrian's head and shoulders. These methods struggle to cope with varying environments and with changes in the appearance of different people, owing to the instability of feature detection. On the other hand, although recent deep learning‐based methods are powerful at extracting discriminative features, their requirement for a large amount of annotated training data restricts the tasks to which they can be applied. To overcome this problem, in this article, we propose a novel method that fuses multiple heterogeneous features through a multi‐feature subspace representation network (MFSRN) to maximize classification performance while keeping the disparity among features as small as possible, that is, under common‐subspace constraints. We conducted comparative experiments with state‐of‐the‐art models on the bird's‐eye view person dataset, and extensive experimental results demonstrated that our proposed MFSRN achieves better recognition performance. Furthermore, the validity and stability of the method are confirmed.
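To make the core idea more concrete, the following is a minimal sketch of how a multi‐feature fusion objective of this kind could be set up in PyTorch: each heterogeneous feature is projected into a shared subspace, a classification loss is combined with a penalty that keeps the projected features close to one another. The class and function names, dimensions, and the specific disparity term (pairwise MSE) are illustrative assumptions, not the paper's actual MFSRN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiFeatureSubspaceFusion(nn.Module):
    """Hypothetical sketch: project each heterogeneous feature (e.g., texture,
    color) into a common subspace and classify from the fused representation."""

    def __init__(self, feature_dims, subspace_dim, num_identities):
        super().__init__()
        # One linear projection per feature type into the common subspace.
        self.projections = nn.ModuleList(
            [nn.Linear(d, subspace_dim) for d in feature_dims]
        )
        self.classifier = nn.Linear(subspace_dim, num_identities)

    def forward(self, features):
        # features: list of tensors, one per feature type, each of shape (B, d_i)
        projected = [proj(f) for proj, f in zip(self.projections, features)]
        fused = torch.stack(projected, dim=0).mean(dim=0)
        logits = self.classifier(fused)
        return logits, projected


def fusion_loss(logits, labels, projected, disparity_weight=0.1):
    """Classification loss plus a pairwise disparity penalty that encourages the
    projected features to agree (a stand-in for a common-subspace constraint)."""
    cls_loss = F.cross_entropy(logits, labels)
    disparity = 0.0
    for i in range(len(projected)):
        for j in range(i + 1, len(projected)):
            disparity = disparity + F.mse_loss(projected[i], projected[j])
    return cls_loss + disparity_weight * disparity
```

Under these assumptions, training would minimize `fusion_loss` over mini‐batches of multi‐feature inputs, trading off identity classification accuracy against agreement among the per‐feature subspace representations via `disparity_weight`.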