Abstract
The smart city is a new concept that technology has brought to society, and cameras are key infrastructure for building one. Using camera information efficiently and effectively plays an important role in people's daily life and in maintaining social order. Pedestrian information accounts for a large proportion of camera data, so we aim to make good use of it. Previous works apply traditional machine learning methods and neural networks to recognize pedestrian attributes, mainly judging whether an attribute is present in natural scenes. However, judging whether an attribute exists is not enough; locating the attribute often provides more information. In this paper, we propose to use semantic segmentation to obtain the position information of pedestrian attributes. We first introduce a pedestrian attribute semantic dataset in natural scenes, called PASD (Pedestrian Attribute Semantic Dataset), which covers 27 visualizable pedestrian attributes. We use DeepLabv3+ to perform experiments on PASD, establishing a mIoU (mean intersection over union) baseline for the 27 pedestrian attributes. To draw useful conclusions, we analyze the mIoU results from three aspects: attribute distribution, accuracy, and resolution.
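The abstract reports results as mIoU over the 27 attribute classes. As a minimal sketch of how this metric is typically computed for semantic segmentation (the class labels and toy masks below are invented for illustration, not taken from PASD):

```python
# Minimal mean intersection-over-union (mIoU) sketch for semantic
# segmentation, computed over flat lists of per-pixel class labels.
# The attribute classes and tiny example masks are hypothetical.

def mean_iou(pred, target, num_classes):
    """Average per-class IoU; classes absent from both the prediction
    and the ground truth are skipped so they do not distort the mean."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy example: 0 = background, 1 = "hat", 2 = "backpack"
pred   = [0, 0, 1, 1, 2, 2, 0, 1]
target = [0, 0, 1, 1, 2, 0, 0, 1]
print(mean_iou(pred, target, num_classes=3))  # 0.75
```

In practice the same computation runs over full prediction/label maps for the whole validation set, accumulating per-class intersection and union counts before dividing.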