Machine learning algorithms (MLAs) perform better when sufficient high-quality training data are available. However, training data are often scarce in seismic facies classification and many other supervised learning applications, because labeling seismic facies is time-consuming and requires considerable effort from domain experts. This study investigates the effect of training data size on the performance of three popular supervised MLAs used for seismic facies classification. We labeled slices from two seismic datasets representing diverse geologic environments and varying classification complexity. The AN Field in the Malay Basin represents a simple three-class problem, whereas a more complex six-class problem is defined on the Dangerous Grounds (DG) dataset offshore Sabah. The labeled data were repeatedly halved, producing eight training subsets of decreasing size. We trained and evaluated support vector machine (SVM), random forest (RF), and neural network (NN) models using a 10-fold cross-validation (CV) procedure, and computed performance metrics to quantify how performance changes with training data size. The experimental results show that, for the DG dataset, where the classification is complex owing to the heterogeneous geology and the larger number of classes, the larger the training subset, the better the classification performance. For the simpler AN dataset, however, the classifiers reached a performance plateau when trained on a limited number of samples. The NN model was the best performer on the large datasets. The RF classifier performed well on both datasets and proved robust when trained on limited samples of the DG data. The SVM performed best where there was a clear margin of separation between the defined classes (the AN data); in contrast, it performed poorly on the DG data and exhibited a performance decline on the largest AN subsets.
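For readers who wish to reproduce the experimental setup, the sketch below shows one way to implement the halving-subset procedure with scikit-learn. It is not the authors' code: the feature extraction, attribute set, and classifier hyperparameters are assumptions, and load_features() is a hypothetical placeholder for loading the labeled seismic samples.

```python
# A minimal sketch (not the authors' code) of the halving-subset experiment:
# train SVM, RF, and NN classifiers on progressively halved training subsets
# and score each with 10-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def halving_subsets(X, y, n_subsets=8, seed=0):
    """Yield progressively halved training subsets, largest first."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    size = len(y)
    for _ in range(n_subsets):
        sub = idx[:size]
        yield X[sub], y[sub]
        size //= 2

# Hyperparameters here are illustrative, not those used in the study.
models = {
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100),
    "NN": MLPClassifier(max_iter=500),
}

# X, y hold the labeled seismic samples and facies labels;
# load_features() is a hypothetical loader for the labeled slices.
X, y = load_features()
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for X_sub, y_sub in halving_subsets(X, y):
    for name, model in models.items():
        scores = cross_val_score(model, X_sub, y_sub, cv=cv)
        print(f"n={len(y_sub):6d}  {name}: mean CV accuracy = {scores.mean():.3f}")
```

A stratified split is used here so that every fold of the 10-fold CV retains all facies classes even in the smallest subsets; the study does not state whether stratification was applied.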