Deep learning demonstrates impressive performance in many medical image analysis tasks. However, its reliability depends on large labeled medical datasets and on the assumption that the training data (source domain) and the test data (target domain) share the same distribution. To relax these requirements, unsupervised medical domain adaptation networks transfer knowledge from a source domain with rich labeled data to a target domain with only unlabeled data by learning domain-invariant features. We observe that conventional adversarial-training-based methods focus on aligning the global distributions and may overlook class-level information, which can lead to negative transfer. In this paper, we aim to learn robust feature alignment for cross-domain medical image analysis. Specifically, in addition to a discriminator that alleviates the domain shift, we introduce an auxiliary classifier to achieve robust feature alignment using class-level information. We first detect unreliable target samples, i.e., those far from the source distribution, via diverse training of the two classifiers. Next, a cross-classifier consistency regularization is proposed to align these unreliable samples and avoid negative transfer. In addition, to fully exploit the knowledge in the unlabeled target data, we propose a within-classifier consistency regularization that improves the robustness of the classifiers in the target domain and further enhances the detection of unreliable target samples. We demonstrate that our proposed dual-consistency regularizations achieve state-of-the-art performance on multiple medical adaptation tasks in terms of both accuracy and Macro-F1 measure. Extensive ablation studies and visualization results are also presented to verify the effectiveness of each proposed module. On the skin adaptation task, our method outperforms the baseline and the second-best method by around 10 and 4 percentage points, respectively. Similarly, on the COVID-19 adaptation task, our model consistently achieves the best performance in terms of both accuracy (96.93%) and Macro-F1 (86.52%).
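To make the dual-consistency idea concrete, the sketch below illustrates one plausible reading of the abstract; it is not the authors' implementation. All names (`feat_extractor`, `clf1`, `clf2`, the disagreement threshold `tau`, and the `weak_aug`/`strong_aug` transforms) and the specific distance and thresholding choices are assumptions introduced for illustration: unreliable target samples are flagged by prediction disagreement between the two diversely trained classifiers, a cross-classifier consistency term pulls their predictions together on those samples, and a within-classifier consistency term enforces agreement between predictions on two augmented views of the same target image.

```python
import torch
import torch.nn.functional as F


def dual_consistency_losses(feat_extractor, clf1, clf2, x_target,
                            weak_aug, strong_aug, tau=0.5):
    """Illustrative sketch (not the paper's code) of the two consistency terms."""
    # Predictions of both classifiers on a weakly augmented target batch.
    f_w = feat_extractor(weak_aug(x_target))
    p1_w = F.softmax(clf1(f_w), dim=1)
    p2_w = F.softmax(clf2(f_w), dim=1)

    # Detect unreliable samples via classifier disagreement (the L1 distance
    # and fixed threshold are assumed choices, not the paper's criterion).
    disagreement = (p1_w - p2_w).abs().sum(dim=1)      # shape: [batch]
    unreliable = (disagreement > tau).float()          # 1.0 = unreliable sample

    # Cross-classifier consistency: align the two classifiers on unreliable samples.
    loss_cross = (unreliable * (p1_w - p2_w).pow(2).sum(dim=1)).mean()

    # Within-classifier consistency: each classifier's prediction on a strongly
    # augmented view should match its prediction on the weak view.
    f_s = feat_extractor(strong_aug(x_target))
    p1_s = F.softmax(clf1(f_s), dim=1)
    p2_s = F.softmax(clf2(f_s), dim=1)
    loss_within = ((p1_s - p1_w.detach()).pow(2).sum(dim=1).mean()
                   + (p2_s - p2_w.detach()).pow(2).sum(dim=1).mean())

    return loss_cross, loss_within
```

In a full training loop, these two terms would be added to the supervised source-domain classification loss and the adversarial domain loss from the discriminator; the abstract does not specify the weighting or the exact disagreement criterion, so those details above are placeholders.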