This paper proposes a new unsupervised domain adaptation framework, named Collaborative Appearance and Semantic Adaptation (CASA), to address the domain mismatch problem in medical image analysis. Domain adaptation has become a prominent research topic, especially when applying established deep neural networks to new domains in medical analysis, e.g., semantic segmentation of medical lesions. To achieve unsupervised domain adaptation, the proposed CASA framework performs a synergistic fusion of adaptation knowledge from both the appearance and semantic perspectives. Specifically, we transform the appearance of medical lesions across domains via a Characterization Transfer Module (CTM), which mitigates the appearance divergence of medical lesions between domains. Meanwhile, a Representation Transfer Module (RTM), built upon a conditional generative adversarial network, transforms features of source lesions into target-like features, further narrowing the domain-wise distribution gap of the underlying semantic knowledge. Finally, a challenging medical image segmentation task is used to extensively validate the effectiveness of the proposed CASA framework. Extensive experimental results show that CASA outperforms state-of-the-art domain adaptation methods by a significant margin.