Abstract
Recent research has drawn attention to the ambiguity surrounding the definition and learnability of Out-of-Distribution recognition. Although the original problem remains unsolved, the term “Out-of-Model Scope” detection offers a clearer perspective. The ability to detect Out-of-Model Scope inputs is particularly beneficial in safety-critical applications such as autonomous driving or medicine. Detecting Out-of-Model Scope situations enhances the system’s robustness and prevents it from operating in unknown and unsafe scenarios. In this paper, we propose a novel approach for Out-of-Model Scope detection that integrates three sources of information: (1) the original input, (2) its latent feature representation extracted by an encoder, and (3) a synthesized version of the input generated from its latent representation. We demonstrate the effectiveness of combining original and synthetically generated inputs to defend against adversarial attacks in the computer vision domain. Our method, TRust Your GENerator (TRYGEN), achieves results comparable to those of other state-of-the-art methods and allows any encoder to be integrated into our pipeline in a plug-and-train fashion. Through our experiments, we evaluate which combinations of the encoder’s features are most effective for discovering Out-of-Model Scope samples and highlight the importance of a compact feature space for training the generator.
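To make the three information sources concrete, the sketch below shows one possible way such a pipeline could be wired together: an encoder produces latent features from the original input, a generator synthesizes a reconstruction from those features, and a simple score compares all three. This is a minimal illustration under assumed architectures and an assumed scoring rule; it is not the authors' TRYGEN implementation, and all module names, layer sizes, and weights are hypothetical.

```python
# Minimal sketch (NOT the authors' TRYGEN code): a reconstruction-based
# Out-of-Model-Scope score combining (1) the original input, (2) its latent
# features from an encoder, and (3) a generated reconstruction.
# Architectures, the scoring rule, and the 0.01 weight are illustrative assumptions.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Toy encoder mapping a 32x32 RGB image to a latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class Generator(nn.Module):
    """Toy generator synthesizing an image back from the latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 16 -> 32
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 8, 8)
        return self.net(h)


def oms_score(x, encoder, generator):
    """Heuristic Out-of-Model-Scope score: a large reconstruction error or an
    atypically large latent norm suggests the input lies outside the scope
    the encoder/generator pair was trained on. The weighting is arbitrary."""
    with torch.no_grad():
        z = encoder(x)                                           # (2) latent features
        x_hat = generator(z)                                     # (3) synthesized input
        recon_err = ((x - x_hat) ** 2).flatten(1).mean(dim=1)    # compare with (1) original
        latent_norm = z.norm(dim=1)
    return recon_err + 0.01 * latent_norm  # higher score = more likely out of scope


if __name__ == "__main__":
    enc, gen = Encoder(), Generator()
    images = torch.rand(4, 3, 32, 32)   # stand-in batch of inputs
    print(oms_score(images, enc, gen))  # one score per image
```

In practice, such a score would be thresholded on held-out in-scope data; the paper's actual method, training procedure, and feature combinations are described in the full text rather than reproduced here.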