Abstract

Models that learn from data are being widely and rapidly deployed today for real‐world use, but they suffer from unforeseen failures that limit their reliability. These failures have several causes, such as distribution shift; adversarial attacks; calibration errors; scarcity of data and/or ground‐truth labels; noisy, corrupted, or partial data; and limitations of evaluation metrics. But many failures also occur because modern AI tasks require reasoning beyond pattern matching, and such reasoning abilities are difficult to formulate as data‐based input–output function fitting. The reliability problem has become increasingly important under the new paradigm of semantic “multimodal” learning. In this article, I discuss findings from our work that provide avenues for the development of robust and reliable computer vision systems, particularly by leveraging the interactions between vision and language. The article expands upon my invited talk at AAAI 2024 and covers three thematic areas: robustness of visual recognition systems, open‐domain reliability for visual reasoning, and challenges and opportunities associated with generative models in vision.
