Federated domain adaptation (FDA) is an effective approach to unsupervised multi-source domain adaptation (UMDA) over networks of distributed parties, improving data privacy and portability. Despite the impressive gains achieved, current FDA works share two common limitations. First, most prior studies require access to the model parameters or gradients of each source party. However, raw source data can be reconstructed from model gradients or parameters, which may leak private information. Second, these works assume that all parties share an identical network architecture, which is impractical and undesirable for low- or high-resource target users. To address these issues, in this work we introduce a more practical UMDA setting, called Federated Multi-source Domain Adaptation on Black-box Models (B²FDA), where all data are stored locally and only the input-output interface of each source model is available. To tackle B²FDA, we propose an effective method, termed Co²-Learning with Multi-Domain Attention (Co-MDA). Experiments on multiple benchmark datasets demonstrate the efficacy of the proposed method. Notably, Co-MDA performs comparably to traditional UMDA methods in which the source data or trained models are fully available.
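To make the black-box constraint concrete, the sketch below (a hypothetical illustration, not the paper's Co-MDA method) shows a target party that can only query each source model through an input-output interface and combines the returned soft predictions into pseudo-labels; the `make_source_api` helper and the uniform domain weights are assumptions for demonstration, whereas Co-MDA would learn a multi-domain attention weighting instead.

```python
import numpy as np

# Hypothetical black-box source model: the target party sees only
# a predict() function mapping inputs to class probabilities, never
# the weights or gradients behind it.
def make_source_api(weights):
    def predict(x):
        logits = x @ weights
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))
        return exp / exp.sum(axis=1, keepdims=True)  # softmax
    return predict

rng = np.random.default_rng(0)
num_classes, dim, num_sources = 3, 5, 2
source_apis = [make_source_api(rng.normal(size=(dim, num_classes)))
               for _ in range(num_sources)]

# Target party: query each black-box source model on unlabeled
# target data and average the soft predictions into pseudo-labels.
# Uniform domain weights stand in for a learned attention here.
x_target = rng.normal(size=(4, dim))
probs = np.stack([api(x_target) for api in source_apis])   # (S, N, C)
domain_weights = np.full(num_sources, 1.0 / num_sources)
pseudo = np.tensordot(domain_weights, probs, axes=1)       # (N, C)
pseudo_labels = pseudo.argmax(axis=1)
```

The target model would then be trained on `(x_target, pseudo_labels)` locally, so no raw data or model internals ever cross party boundaries.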