A stochastic network simulation is verified when its distribution of outputs aligns with the ground truth, while tolerating deviations due to variability in real-world measurements and the inherent randomness of a stochastic simulation. However, comparing distributions may yield false positives, as erroneous simulations may produce the expected distribution yet exhibit aberrations in low-level patterns. For instance, the number of sick individuals may follow the right trend over time even though the wrong individuals were infected. We previously proposed an approach that transforms simulation traces into images, which are then verified by machine learning algorithms that account for low-level patterns. We demonstrated the viability of this approach when many simulation traces are compared against a large ground-truth data set. However, ground truth data are often limited. For example, a publication may include only a few images of its simulation as illustrations; hence, teams that independently re-implement the model can compare low-level patterns against only a few cases. In this paper, we examine whether our approach can be applied to very small data sets (e.g., 5–10 images), such as those provided in publications. Depending on the network simulation model (e.g., rumor spread, cascading failure, or disease spread), we show that results obtained with little data can even surpass results obtained with moderate amounts of data, at the cost of higher variability. Although good accuracy is obtained in detecting several forms of errors, this paper is only a first step in the use of this technique for verification; hence, future work should assess the applicability of our approach to other types of network simulations.