Abstract

When we want to describe an object or a human, we tend to use visual attributes. For example, a laptop can have a wide screen, a silver color, and a brand logo, whereas a human can be tall, female, wearing a blue t-shirt, and carrying a backpack. In computer vision, visual attributes are the equivalent of adjectives in speech. We rely on visual attributes because (i) they enhance our understanding by creating a mental image of what an object or human looks like; (ii) they narrow down the possible results when we search for a product online or provide a suspect description; (iii) they can be composed in different ways to create new descriptions; (iv) they generalize well, since with some fine-tuning they can be reused to recognize objects across different tasks; and (v) they are a meaningful semantic representation of objects or humans that can be understood by both computers and humans.

However, effectively predicting the visual attributes of a human from an image remains a challenging task. In real-life scenarios, images might be of low resolution, humans might be partially occluded in cluttered scenes, or there might be significant pose variations.

In this talk, a deep learning method that solves multiple binary classification tasks jointly will be described, enabling queries such as "Provide the images showing an old man standing in front of a coffee shop." The method performs end-to-end learning by feeding a single convolutional network (ConvNet) the entire image of a human. It will be demonstrated how both multi-task and curriculum learning may be exploited in this framework, and state-of-the-art results will be discussed on publicly available datasets of humans standing with their full body visible.
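The multi-task setup described above can be sketched as a shared feature extractor followed by one sigmoid head per binary attribute, with the per-attribute binary cross-entropy losses combined into a single training objective. The minimal sketch below (plain Python, with hypothetical attribute names) illustrates only the loss combination; in the actual method the shared features come from a ConvNet applied to the full image of the person, and the loss weighting and training schedule follow the multi-task and curriculum strategies of the talk.

```python
import math

# Hypothetical binary attribute tasks, one sigmoid head each.
ATTRIBUTES = ["is_male", "is_old", "wears_tshirt"]

def sigmoid(z):
    """Squash a head's logit into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def bce(p, y):
    """Binary cross-entropy for a single attribute prediction."""
    eps = 1e-12  # guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1.0 - p + eps))

def multi_task_loss(logits, labels):
    """Sum the per-attribute losses into one objective, so every
    head backpropagates through the shared ConvNet trunk."""
    return sum(bce(sigmoid(z), y) for z, y in zip(logits, labels))

# Confident, correct predictions for all three attributes -> tiny loss.
loss_good = multi_task_loss([10.0, -10.0, 10.0], [1, 0, 1])

# Confident but wrong predictions -> large loss.
loss_bad = multi_task_loss([-10.0, 10.0, -10.0], [1, 0, 1])
```

Summing the losses is the simplest combination rule; in practice the per-task terms are often weighted, and curriculum learning amounts to controlling which tasks or examples contribute to the sum at each training stage.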

