Progressive techniques are iterative, adaptive approaches that incrementally refine the analysis process, improving both the efficiency and the precision of data processing. They comprise components such as data sampling, feature selection, and learning algorithms. This study proposes integrating an Artificial Neural Network (ANN) with a Progressive Learning Model (PLM) to improve learning from large-scale datasets. The Synthetic Minority Oversampling Technique (SMOTE) and the Pearson Correlation Coefficient (PCC) are employed to handle the imbalanced dataset and to select features, respectively. A notable strategy for optimizing neural network performance is progressive weight updating, in which the network's weights are modified incrementally during training rather than being driven solely by gradient values. The proposed method gradually localizes discriminative data by incorporating local details into the overall global structure, reducing training time through iterative weight updates. The model is evaluated on two datasets, Poker Hand and Higgs, and its performance is compared with that of two classification algorithms: the Population and Global Search Improved Squirrel Search Algorithm (PGS-ISSA) and Adaptive E-Bat (AEB). On Poker Hand, ANN-PLM converges after 50 epochs, compared with 65 epochs without PLM; on Higgs, convergence is reached after 25 epochs with PLM versus 40 without.
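The abstract names SMOTE for class imbalance and PCC for feature selection. As a minimal illustrative sketch only, the following NumPy code shows the two preprocessing ideas in their textbook form; the function names and parameters are illustrative assumptions, not the paper's implementation (a production pipeline would typically use a library such as imbalanced-learn instead):

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, rng=None):
    """SMOTE-style oversampling (sketch): create synthetic minority samples
    by interpolating each chosen sample toward one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per sample
    base = rng.integers(0, n, size=n_new)       # pick a base sample
    neigh = nn[base, rng.integers(0, k, size=n_new)]  # and one of its neighbours
    gap = rng.random((n_new, 1))                # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

def pcc_select(X, y, top_m):
    """PCC feature selection (sketch): rank features by the absolute Pearson
    correlation with the target and keep the top_m indices."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(-np.abs(r))[:top_m]
```

Because each synthetic point is a convex combination of two real minority samples, SMOTE-style oversampling stays inside the minority class's feature ranges rather than fabricating outliers.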