Predictive modeling is pervasive across industries (e.g., driving patterns in insurance, fraud detection in finance, and forecasting in energy). However, for most machine learning (ML) models, deterioration of prediction accuracy due to drift is inevitable, even when performance is acceptable in a test setting. Drift occurs naturally as accumulated information changes over time, but it must be managed so that an ML model's predictions remain relevant in real time, when decisions are being made. This study proposes a drift mitigation framework (DMF) to obtain an effective sample size in a scalable and unbiased manner for sustainable predictive performance from ML models, using microgrids (MGs) as a testbed. Scenario selection, an alternative sampling technique grounded in probability theory, was used to obtain an effective sample from historical data. The resulting scenarios were evaluated using a quantile-quantile (Q-Q) plot, Welch's t-test, and relative entropy measures to verify that the obtained sample effectively captures the behavior of the general population. The results showed that two of the three climate factors used in our experiment formed a nearly straight line on their Q-Q plots, suggesting the effective sample retained statistical properties comparable to the theoretical distributions fitted to the original population and the complete training data. At alpha = 0.05, a hypothesis test was conducted with the null hypothesis that the means of the reduced and complete data were statistically equivalent for ambient temperature, solar irradiance, and wind speed. The p-values reported for each stochastic climate factor were 1.01, 0.296, and 4.13, suggesting insufficient evidence to reject the null hypothesis. Similarly, the resulting population stability index (PSI) values for the three stochastic climate factors were 0.175, 0.352, and 0.023, respectively. Hence, there was no significant shift between the population data and the effective sample for any factor.
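The three statistical checks described above (Q-Q plot, Welch's t-test, and a PSI-style relative entropy measure) can be reproduced with standard tooling. The following is a minimal sketch in Python, assuming `population` and `effective_sample` are one-dimensional arrays holding a single climate factor; the synthetic stand-in data, the `psi` helper, and all variable names are illustrative assumptions, not the study's actual implementation.

```python
# Sketch of the sample-validation checks, using synthetic stand-in data
# in place of the study's actual microgrid climate measurements.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
population = rng.normal(loc=25.0, scale=5.0, size=10_000)   # e.g., ambient temperature
effective_sample = rng.choice(population, size=500, replace=False)

# 1. Q-Q plot: compare sample quantiles against a theoretical normal
#    distribution; a nearly straight line suggests the effective sample
#    preserves the population's distributional shape.
stats.probplot(effective_sample, dist="norm", plot=plt)
plt.title("Q-Q plot: effective sample vs. fitted normal")
plt.show()

# 2. Welch's t-test (unequal variances): H0 says the means of the reduced
#    and complete data are statistically equivalent.
t_stat, p_value = stats.ttest_ind(effective_sample, population, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
# p > 0.05 -> insufficient evidence to reject H0 at alpha = 0.05.

# 3. Population stability index (PSI): a relative-entropy-style measure of
#    distributional shift between the population and the effective sample.
def psi(expected, actual, n_bins=10, eps=1e-6):
    """PSI over quantile bins derived from the expected distribution."""
    edges = np.quantile(expected, np.linspace(0.0, 1.0, n_bins + 1))
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    e_frac = e_counts / e_counts.sum() + eps  # eps avoids log(0)
    a_frac = a_counts / a_counts.sum() + eps
    return np.sum((a_frac - e_frac) * np.log(a_frac / e_frac))

print(f"PSI = {psi(population, effective_sample):.3f}")
```

Under the common rule of thumb, a PSI below 0.1 indicates little shift and values above 0.25 indicate a notable shift; the quantile-based binning used here is one typical choice, and other binning schemes will yield somewhat different PSI values.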