Existing unsupervised domain adaptation (UDA) methods have improved the prognostic performance of cross-domain remaining useful life (RUL) prediction to some extent, but optimizing only a single criterion (either maximum mean discrepancy (MMD) or an adversarial mechanism) to reduce the domain discrepancy limits further improvement. Moreover, learning a set of informative degradation features has been a long-standing challenge in RUL prediction. To address these issues, an effective UDA method, namely deep residual LSTM with domain invariance (DIDRLSTM), is proposed to improve prognostic performance. First, the deep residual LSTM (DRLSTM) is designed as the feature extractor to learn high-level features from both the source and target domains. The residual connections allow the DRLSTM to stack more nonlinear layers and thus learn more representative degradation features. Second, two modules are integrated to further reduce the domain discrepancy. One is domain adaptation, which reduces the discrepancy by imposing multi-kernel MMD (MK-MMD) constraints on the features mapped into a reproducing kernel Hilbert space (RKHS). The other is domain confusion, which reduces the discrepancy by minimizing the discriminative ability of a domain classifier trained under an adversarial optimization strategy. Finally, the performance of DIDRLSTM is validated on the C-MAPSS and FEMTO-ST datasets. The experimental results show that DIDRLSTM outperforms five state-of-the-art UDA methods.
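To make the two discrepancy-reduction modules concrete, the following is a minimal sketch (not the authors' implementation) of how an LSTM feature extractor can be trained jointly with an MK-MMD penalty between source and target features and a domain classifier under a gradient-reversal (adversarial) objective. All layer sizes, kernel bandwidths, loss weights, and variable names are illustrative assumptions.

```python
# Minimal sketch: MK-MMD + adversarial domain confusion on top of an LSTM
# feature extractor. Shapes, hyperparameters, and names are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def mk_mmd(src, tgt, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Biased MK-MMD^2 estimate with a uniform mixture of Gaussian kernels."""
    x = torch.cat([src, tgt], dim=0)
    d2 = (x.unsqueeze(1) - x.unsqueeze(0)).pow(2).sum(-1)   # pairwise squared distances
    k = sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas) / len(sigmas)
    n = src.size(0)
    k_ss, k_tt, k_st = k[:n, :n], k[n:, n:], k[:n, n:]
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()

class Extractor(nn.Module):                      # stand-in for the DRLSTM extractor
    def __init__(self, in_dim=14, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
    def forward(self, x):
        out, _ = self.lstm(x)
        return out[:, -1]                        # last-step feature vector

extractor = Extractor()
regressor = nn.Linear(32, 1)                     # RUL regression head
domain_clf = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam([*extractor.parameters(), *regressor.parameters(),
                        *domain_clf.parameters()], lr=1e-3)

# One toy training step with random stand-in data (batch, time, sensors).
xs, ys = torch.randn(8, 30, 14), torch.rand(8, 1)   # labeled source domain
xt = torch.randn(8, 30, 14)                         # unlabeled target domain
fs, ft = extractor(xs), extractor(xt)

rul_loss = nn.functional.mse_loss(regressor(fs), ys)        # supervised on source only
mmd_loss = mk_mmd(fs, ft)                                    # domain adaptation module
dom_logits = domain_clf(GradReverse.apply(torch.cat([fs, ft]), 1.0))
dom_labels = torch.cat([torch.zeros(8, dtype=torch.long),
                        torch.ones(8, dtype=torch.long)])
adv_loss = nn.functional.cross_entropy(dom_logits, dom_labels)  # domain confusion module

loss = rul_loss + 0.5 * mmd_loss + 0.5 * adv_loss   # trade-off weights are illustrative
opt.zero_grad(); loss.backward(); opt.step()
```

The gradient-reversal layer lets the feature extractor be pushed toward domain-confusing features while the domain classifier is trained normally, so both discrepancy terms can be minimized in a single backward pass.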