Abstract

This paper proposes a novel method capable of both detecting out-of-distribution (OOD) data and generating in-distribution samples. To achieve this, a VAE model is adopted and augmented with a memory module, giving it the capacity to identify OOD data and to synthesise new in-distribution samples. The VAE is trained on normal data, and the memory stores prototypical patterns of the normal data distribution. At test time, the input is encoded by the VAE encoder; this encoding serves as a query to retrieve related memory items, which are then integrated with the input encoding and passed to the decoder for reconstruction. Normal samples reconstruct well and yield low reconstruction errors, whereas OOD inputs produce high reconstruction errors because their encodings are replaced by retrieved normal patterns. Prior work has paired memory modules with standard autoencoders for OOD detection; this method instead builds on a VAE architecture, which additionally provides generation ability. Experiments on the CIFAR-10 and MNIST datasets show that the memory-augmented VAE consistently outperforms the baseline, particularly where OOD data resembles normal patterns, an improvement attributed to the richer latent-space representation provided by the VAE. Overall, the memory-equipped VAE framework is effective at both identifying OOD data and generating new in-distribution samples.
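To make the retrieve-and-reconstruct pipeline concrete, the sketch below shows one plausible way to wire it up in PyTorch: the latent code produced by the encoder is used as a query over a learned memory bank, replaced by an attention-weighted combination of memory items, and decoded; the reconstruction error then serves as the OOD score. This is a minimal illustration under assumed design choices (fully connected encoder/decoder, cosine-similarity addressing, MSE reconstruction); the class and parameter names are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedVAE(nn.Module):
    """Minimal sketch: a VAE whose latent code is replaced by a weighted
    combination of learned memory items (normal prototypes) before decoding."""

    def __init__(self, input_dim=784, latent_dim=16, num_memory_items=50):
        super().__init__()
        # Encoder maps the input to the parameters of q(z|x).
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        # Memory bank of prototypical latent patterns, learned on normal data.
        self.memory = nn.Parameter(torch.randn(num_memory_items, latent_dim))
        # Decoder reconstructs the input from the memory-based latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def retrieve(self, z):
        # Use the latent code as a query: cosine similarity against memory
        # items, softmax to attention weights, then a weighted sum of items.
        attn = F.softmax(
            F.normalize(z, dim=-1) @ F.normalize(self.memory, dim=-1).t(), dim=-1
        )
        return attn @ self.memory

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        z_hat = self.retrieve(z)   # encoding replaced by retrieved normal patterns
        x_hat = self.decoder(z_hat)
        return x_hat, mu, logvar


def ood_score(model, x):
    """Per-sample reconstruction error used as the OOD score (high for OOD)."""
    x_hat, _, _ = model(x)
    return F.mse_loss(x_hat, x, reduction="none").mean(dim=-1)


# Usage: score a batch of flattened 28x28 images.
model = MemoryAugmentedVAE()
scores = ood_score(model, torch.rand(8, 784))
```

Because the decoder only ever sees combinations of memory items, decoding the memory prototypes (or interpolations between them) also yields a simple route to generating in-distribution samples, which is the generative side of the framework described above.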
