Seismic facies classification is an important task in seismic interpretation that enables the identification of rock bodies with similar physical characteristics. Given the recent surge in data volumes, manual labeling of seismic data is immensely time-consuming. Self-supervised learning (SSL) enables models to learn powerful representations from unlabeled data, thereby improving performance on downstream tasks with limited labeled data. We investigate the effectiveness of SSL for efficient facies classification by evaluating various convolutional and vision transformer-based models. We pretrain the models on image reconstruction and fine-tune them on facies segmentation. Results on the southern North Sea F3 seismic block in the Netherlands and the Penobscot seismic volume in the Sable Subbasin, offshore Nova Scotia, Canada, show that SSL achieves performance comparable to supervised learning while using only 5%–10% of the labeled data. Furthermore, SSL exhibits stable domain adaptation on the Penobscot data set even with 5% labeled data, indicating improved generalization compared with the supervised learning setup. These findings demonstrate that SSL significantly enhances model accuracy and data efficiency for seismic facies classification.