Medical imaging techniques have been widely used for the diagnosis of various diseases. However, imaging-based diagnosis generally depends on the clinical skills of radiologists. Computer-aided diagnosis (CAD) can help radiologists improve diagnostic accuracy as well as consistency and reproducibility. Although convolutional neural networks (CNNs) have shown their feasibility and effectiveness in CAD, they generally suffer from the problem of small sample sizes when training CAD models. Self-supervised learning (SSL) has shown its effectiveness in medical image analysis, especially when training samples are limited. However, the backbone of the downstream task sometimes cannot be well pre-trained in the conventional SSL framework due to the limitations of the pretext task and the fine-tuning mechanism. In this work, an improved SSL framework, named Hybrid-supervised Bidirectional Transfer Networks (HBTN), is proposed to improve the performance of CAD models. Specifically, a novel Gray-Scale Image Mapping (GSIM) task is developed, which retains the widely used image restoration task in SSL as the pretext task but further embeds class label information into it to improve discriminative feature learning of the corresponding network model. The proposed HBTN then integrates two different network architectures, i.e., the image restoration network for the pretext task and the classification network for the downstream task, into a unified hybrid-supervised learning (HSL) framework. It jointly trains both networks and collaboratively transfers knowledge between them, thereby improving the performance of the downstream network. The proposed HBTN is evaluated on two medical image datasets for CAD tasks. The experimental results indicate that HBTN outperforms conventional SSL algorithms for CAD with limited training samples.
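
To make the hybrid-supervised joint training idea concrete, the following is a minimal PyTorch-style sketch, not the paper's implementation. The network definitions (RestorationNet, ClassifierNet), the function and parameter names (train_step, lambda_pretext), and the particular coupling in which the classifier consumes the restored image are all illustrative assumptions; only the general idea of jointly optimizing a pretext (image restoration) loss and a downstream (classification) loss, so that label information also reaches the pretext network, is taken from the abstract.

```python
import torch
import torch.nn as nn

class RestorationNet(nn.Module):
    """Hypothetical pretext-task network: restores a degraded gray-scale image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats)

class ClassifierNet(nn.Module):
    """Hypothetical downstream network: predicts the class label."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

restorer, classifier = RestorationNet(), ClassifierNet()
opt = torch.optim.Adam(list(restorer.parameters()) + list(classifier.parameters()), lr=1e-4)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

def train_step(degraded, original, labels, lambda_pretext=0.5):
    """One joint update: the pretext (restoration) and downstream (classification)
    losses are optimized together, so classification gradients also flow back into
    the restoration network, embedding label information into the pretext task.
    The exact transfer mechanism here is only illustrative."""
    restored = restorer(degraded)
    logits = classifier(restored)  # assumed coupling: classifier reads the restored image
    loss = ce(logits, labels) + lambda_pretext * mse(restored, original)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Tiny usage example with random tensors (batch of 4 gray-scale 64x64 images).
x_deg, x_orig = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
y = torch.randint(0, 2, (4,))
print(train_step(x_deg, x_orig, y))
```

In this sketch the single combined loss is what makes the training "hybrid-supervised": the restoration term provides the self-supervised signal while the cross-entropy term injects class labels, and because both networks share one optimizer the two tasks shape each other's parameters in every step.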