Abstract

Continuous Unsupervised Domain Adaptation (CUDA) can alleviate deep learning models' performance degradation on out-of-distribution data. However, low stability, i.e., the erosion of past knowledge while adapting to new domains, remains a major challenge. Traditional solutions, such as (1) approximation or (2) memorization, fail to balance a system's computational load, memory resources, and stability. In response, we introduce Approximate and Memorize (A&M). Compared to traditional approximation methods, A&M utilizes factorized generative models to mitigate mode collapse, offering improved computational efficiency and training stability. Compared to traditional memorization approaches, A&M boosts memory efficiency by learning compressed representations of past information rather than storing it in raw form. A&M shows 50% less forgetting on problems with constrained memory and a large number of domains; its scalable and parallel design makes it well suited to real-world systems.
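The core idea sketched in the abstract, replaying compressed representations of past domains instead of storing raw data, can be illustrated with a minimal example. Note that this is only a hedged sketch under assumed details: the class name LatentMemory, the autoencoder architecture, and the feature dimensions are all hypothetical and are not taken from the paper's actual A&M implementation.

```python
# Illustrative sketch only: compress past-domain features into compact latent
# codes, then replay approximate reconstructions while adapting to new domains.
# All names and dimensions here are assumptions for illustration.
import torch
import torch.nn as nn

class LatentMemory(nn.Module):
    """Stores past-domain information as low-dimensional codes rather than raw data."""
    def __init__(self, feat_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def memorize(self, feats):
        # Keep only the compact codes, not the raw features.
        return self.encoder(feats).detach()

    def replay(self, codes):
        # Reconstruct approximate past-domain features for rehearsal.
        return self.decoder(codes)

# Usage: mix replayed pseudo-features of old domains into each adaptation
# batch to limit forgetting while the model adapts to a new domain.
memory = LatentMemory()
old_feats = torch.randn(64, 256)      # features from a previous domain (dummy data)
codes = memory.memorize(old_feats)    # compressed memory, shape (64, 32)
replayed = memory.replay(codes)       # approximate reconstruction, shape (64, 256)
```

The memory saving comes from storing 32-dimensional codes instead of 256-dimensional raw features; in the paper's framing, the factorized generative model plays the role of this compressed memory.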
