Medical image segmentation is fundamental to medical image analysis and has wide clinical applications such as disease diagnosis and surgical planning. The current prevalent solution is to train a deep network in a fully supervised manner on a large-scale, fully labelled dataset. However, owing to the high labor cost and the medical expertise required, such a dataset is often unavailable; instead, there exist multiple partially labelled datasets, each originally established for a specific purpose. To make full use of these partially labelled datasets, we propose a novel partially supervised segmentation network, named PSSNet, which consists of a task-specific feature learning network followed by a cross-task attention (xTA) module that exploits task dependencies to enhance task-specific features. To address the challenges raised by unlabelled classes and by domain shift across datasets, we propose an adversarial self-training strategy. We conduct experiments on two medical image segmentation tasks. The first is fine-grained fundus image segmentation, which aims to simultaneously segment four classes of lesions, the optic disc (OD) and optic cup (OC), and vessels. Validation on seven datasets demonstrates that PSSNet outperforms three baselines and three state-of-the-art methods. The second is multi-organ abdominal segmentation in CT images, where PSSNet is trained on three partially labelled datasets, i.e., LiTS, KiTS, and Spleen. Validation on one fully labelled dataset, i.e., BTCV, demonstrates that PSSNet achieves better performance than four state-of-the-art methods. The code is publicly available at https://github.com/CVIU-CSU/PSSNet.
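The abstract does not give implementation details for the xTA module; as a rough illustration of the general idea, the following is a minimal NumPy sketch of scaled dot-product cross-attention between two tasks' features. All names, shapes, and the residual formulation are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_task_attention(feat_a, feat_b, w_q, w_k, w_v):
    """Enhance task-A features using task-B features via cross-attention.

    feat_a, feat_b: (N, d) flattened spatial feature vectors, one row per location.
    w_q, w_k, w_v:  (d, d) projection matrices (learned in practice; random here).
    """
    q = feat_a @ w_q                                  # queries from task A
    k = feat_b @ w_k                                  # keys from task B
    v = feat_b @ w_v                                  # values from task B
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))    # (N, N) cross-task weights
    return feat_a + attn @ v                          # residual enhancement of task A

rng = np.random.default_rng(0)
n, d = 16, 8
fa = rng.standard_normal((n, d))
fb = rng.standard_normal((n, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out = cross_task_attention(fa, fb, wq, wk, wv)
print(out.shape)  # (16, 8): same shape as the task-A features it enhances
```

In this sketch one task's features attend to another's, so regions informative for one task (e.g., vessels) can reweight the features of a related task (e.g., lesions); the residual connection keeps the original task-specific features intact.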