Abstract

Clothing attribute recognition, especially in unconstrained street images, is a challenging multimedia task. Existing methods for multi-task clothing attribute prediction often ignore the relation between specific attributes and spatial positions. However, attribute responses are location-sensitive: different spatial locations contribute differently to different attributes. Inspired by this locality of clothing attributes, in this paper we introduce an attention mechanism that incorporates positional information into clothing attribute prediction using only image-level annotations. Directly applying a traditional spatial attention model to each task yields only limited improvement, since such a model ignores the influence of the other tasks. Instead, we propose a novel task-aware attention mechanism that estimates the importance of each position across different tasks. We first evaluate the task attention network within an end-to-end multi-task clothing attribute learning architecture on the shop domain. We then employ a curriculum learning strategy that transfers the well-trained shop-domain attribute knowledge to street-domain attribute prediction. Experiments are conducted on three clothing benchmarks: a cross-domain clothing attribute dataset, a woman clothing dataset, and a man clothing dataset. The attribute prediction results demonstrate the superiority of the proposed task-aware attention mechanism over several state-of-the-art methods in both the shop and street domains.
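
To make the idea concrete, below is a minimal PyTorch sketch of one plausible task-aware spatial attention module: per-task attention logits are predicted from shared backbone features and then mixed through a learned task-interaction matrix, so that each task's spatial weighting also reflects the other tasks. This is an illustrative sketch under our own assumptions, not the paper's implementation; all names (`TaskAwareAttention`, `task_mix`, the feature shapes) are hypothetical.

```python
# A minimal sketch of task-aware spatial attention for multi-task
# attribute prediction. Hypothetical module, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskAwareAttention(nn.Module):
    """Per-task spatial attention whose logits are mixed across tasks."""

    def __init__(self, in_channels: int, num_tasks: int):
        super().__init__()
        # One attention-logit map per task, predicted from shared features.
        self.score = nn.Conv2d(in_channels, num_tasks, kernel_size=1)
        # Learned task-interaction matrix (initialized to identity):
        # each task's attention also depends on the other tasks' logits.
        self.task_mix = nn.Parameter(torch.eye(num_tasks))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) shared backbone features.
        b, c, h, w = feats.shape
        logits = self.score(feats)                      # (B, T, H, W)
        # Mix logits across tasks: task s attends using all tasks' scores.
        logits = torch.einsum("st,bthw->bshw", self.task_mix, logits)
        # Spatial softmax per task over the H*W positions.
        attn = F.softmax(logits.flatten(2), dim=-1).view(b, -1, h, w)
        # Attention-weighted pooling: one feature vector per task.
        pooled = torch.einsum("bthw,bchw->btc", attn, feats)
        return pooled                                   # (B, T, C)


if __name__ == "__main__":
    module = TaskAwareAttention(in_channels=512, num_tasks=6)
    x = torch.randn(2, 512, 14, 14)      # e.g., conv5-level features
    print(module(x).shape)               # torch.Size([2, 6, 512])
```

Each per-task pooled vector would then feed a task-specific classifier head; with only image-level attribute labels, the attention maps are learned implicitly through the classification loss.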
