Continuous Unsupervised Domain Adaptation (CUDA) can alleviate deep learning models' performance degradation on out-of-distribution data. However, low stability, i.e., the erosion of past knowledge while adapting to new domains, remains a major challenge. Traditional solutions, such as (1) approximation or (2) memorization, fail to balance the system's computational load, memory footprint, and stability. In response, we introduce Approximate and Memorize (A&M). Compared to traditional approximation methods, A&M uses factorized generative models to mitigate mode collapse, offering improved computational efficiency and training stability. Compared to traditional memorization approaches, A&M improves memory efficiency by learning compressed representations of past information rather than storing it in raw form. A&M exhibits 50% less forgetting in settings with constrained memory and an extended number of domains, and its scalable, parallel design makes it well suited to real-world systems.