Motivation
Deep learning (DL) has revolutionized condition monitoring (CoM) in mechanical systems by reducing manual signal processing. However, DL's industrial adoption remains limited by its low robustness against distribution shifts. Existing CoM approaches focus on adapting to a one-time distribution shift. We introduce Continuous Unsupervised Domain Adaptation-based CoM (CUDACoM), which tackles continuous distribution shifts in systems operating under perpetually dynamic conditions.

Methodology
CUDACoM mitigates confirmation bias, which is detrimental over long domain sequences, through two novel strategies: (1) Fresh Initialization and (2) In-Domain Pseudo-Labeling. Fresh Initialization maintains high model plasticity, while In-Domain Pseudo-Labeling improves pseudo-label accuracy, enhancing the model's adaptability. Together, these strategies reduce confirmation bias, which is crucial for robust self-training, making CUDACoM well suited to long-sequence domain adaptation in perpetually dynamic environments.

Results
CUDACoM outperforms state-of-the-art (SOTA) adversarial and self-training approaches. It is validated through two practical case studies: a 200% change in rotational speed (RPM) and gradual sensor degradation across 40 noise levels. These challenging case studies exhibit stronger data shifts than the commonly used standard benchmark datasets. The second case is especially novel, formulating robustness to noise as a domain adaptation problem. CUDACoM achieves a test accuracy of 0.937 in the RPM case (vs. SOTA's 0.770) and 0.849 in the sensor-degradation case (vs. SOTA's 0.751).

Impact
This study addresses the overlooked challenges of employing DL for CoM in perpetually dynamic environments, particularly confirmation bias. With its computational efficiency, CUDACoM provides a practical solution that enhances reliability and facilitates the integration of robust DL into industrial CoM systems.
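The two methodological strategies can be sketched as a minimal continual self-training loop. This is a hypothetical illustration, not the paper's implementation: it uses a toy nearest-centroid classifier and synthetic drifting data, and all function names (`make_domain`, `fit_centroids`, `predict`) are assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES = 2

def make_domain(shift, n=100):
    """Synthetic two-class domain whose class means drift by `shift`
    (a stand-in for a continuous distribution shift, e.g. RPM change)."""
    X0 = rng.normal([0.0 + shift, 0.0], 0.4, (n, 2))
    X1 = rng.normal([3.0 + shift, 3.0], 0.4, (n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    """Toy 'model': one centroid per class."""
    return np.stack([X[y == c].mean(axis=0) for c in range(N_CLASSES)])

def predict(centroids, X):
    """Return predicted labels and a distance-based confidence proxy."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1), d.min(axis=1)

# Pretrain on the labeled source domain; these parameters are kept
# frozen and reused as the starting point for every target domain.
Xs, ys = make_domain(shift=0.0)
source_centroids = fit_centroids(Xs, ys)

accs = []
for t in range(1, 6):  # a sequence of unlabeled target domains
    Xt, yt_true = make_domain(shift=0.5 * t)  # labels used only to evaluate

    # (1) Fresh Initialization: restart each domain from the source model
    #     instead of the previously adapted one, preserving plasticity.
    centroids = source_centroids.copy()

    # (2) In-Domain Pseudo-Labeling: pseudo-labels come from the current
    #     domain only; self-train on the more confident half of samples.
    for _ in range(5):
        yp, dist = predict(centroids, Xt)
        keep = dist < np.median(dist)
        centroids = fit_centroids(Xt[keep], yp[keep])

    yp, _ = predict(centroids, Xt)
    accs.append((yp == yt_true).mean())

print([round(a, 2) for a in accs])
```

Restarting from the source weights avoids compounding errors across the domain sequence (the confirmation bias the abstract targets), while confidence-filtered in-domain pseudo-labels keep the adaptation signal tied to the current operating condition.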