Many successful machine-learning methods for medical image analysis rely on supervised learning, which often requires large expert-annotated datasets to achieve high accuracy. However, annotating medical data is time-consuming and expensive, especially for segmentation tasks. To overcome the problem of learning from limited labeled medical image data, this work proposes an alternative deep-learning training strategy based on self-supervised pretraining on unlabeled imaging data. During pretraining, different distortions are randomly applied to random areas of unlabeled images, and a Mask R-CNN architecture is trained to localize the distorted regions and recover the original pixel values. The pretrained model is assumed to acquire knowledge of the relevant image texture through this self-supervised task, providing a good basis for fine-tuning the model to segment the structure of interest using a limited amount of labeled training data. The effectiveness of the proposed method in different pretraining and fine-tuning scenarios was evaluated on the Osteoarthritis Initiative dataset, with the aim of segmenting effusions in knee MRI datasets. The proposed self-supervised pretraining improved the Dice score by up to 18% compared to training the models on the limited annotated data alone. The proposed self-supervised learning approach can be applied to many other medical image analysis tasks, including anomaly detection, segmentation, and classification.
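The distortion-based pretext task described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific distortion types (additive noise, intensity scaling, pixel shuffling), the region-sampling scheme, and the helper name `make_pretext_sample` are all assumptions chosen for clarity. Each generated sample pairs a distorted image with per-region binary masks, which is the input/target format a Mask R-CNN-style model would be trained on for localization and pixel recovery.

```python
import numpy as np

def make_pretext_sample(image, num_regions=3, max_frac=0.3, rng=None):
    """Create one self-supervised training sample: a distorted copy of
    `image` plus one binary mask per distorted region.

    The distortion types and region sampling are illustrative
    assumptions, not the paper's exact choices.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    distorted = image.astype(np.float32).copy()
    masks = []
    for _ in range(num_regions):
        # Sample a random rectangular region (at most max_frac of each side).
        rh = int(rng.integers(1, max(2, int(h * max_frac))))
        rw = int(rng.integers(1, max(2, int(w * max_frac))))
        y = int(rng.integers(0, h - rh))
        x = int(rng.integers(0, w - rw))
        patch = distorted[y:y + rh, x:x + rw]
        # Randomly pick a distortion: additive Gaussian noise,
        # intensity scaling, or pixel shuffling within the region.
        kind = int(rng.integers(0, 3))
        if kind == 0:
            patch += rng.normal(0.0, 0.2, patch.shape)
        elif kind == 1:
            patch *= rng.uniform(0.5, 1.5)
        else:
            flat = patch.reshape(-1).copy()
            rng.shuffle(flat)
            patch[...] = flat.reshape(patch.shape)
        # Record where the distortion was applied.
        mask = np.zeros((h, w), dtype=np.uint8)
        mask[y:y + rh, x:x + rw] = 1
        masks.append(mask)
    # A Mask R-CNN-style model would be trained to predict `masks`
    # (localization) and to reconstruct `image` from `distorted`
    # (pixel recovery).
    return distorted, np.stack(masks)
```

Once pretrained on such samples, the same backbone can be fine-tuned on the small labeled set, reusing the learned texture representations for the downstream segmentation task.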