Big data classification has attracted considerable research attention. This work introduces a novel big data classification paradigm comprising preprocessing, feature extraction and classification stages. In the preprocessing stage the data are normalized, and the MapReduce framework is then used to manage the massive data volume. Prior to classification, three groups of features are extracted: statistical features (mean, median, min/max and standard deviation), higher-order statistical features (skewness, kurtosis and enhanced entropy) and correlation-based features. In the reduce stage, a hybrid model combining a Bi-LSTM and a deep maxout network classifies the data. To ensure classification accuracy, the model is trained with the new Hybrid Butterfly Positioned Coot Optimization (HBPCO) algorithm. The proposed method attains an accuracy of 97.45%, outperforming NN (85.13%), CNN (83.78%), RNN (78.37%), Bi-LSTM (82.43%) and SVM (87.83%).
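To make the preprocessing step concrete, the following is a minimal single-machine sketch of normalization organized in MapReduce style: a map phase computes per-chunk partial statistics and a reduce phase combines them before rescaling. The choice of min-max scaling and the chunking scheme are assumptions for illustration; the paper's exact normalization formula is not reproduced here.

```python
from functools import reduce

def map_stats(chunk):
    """Map phase: partial (min, max) for one attribute over one chunk."""
    return (min(chunk), max(chunk))

def reduce_stats(a, b):
    """Reduce phase: combine two partial (min, max) pairs."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def minmax_normalize(chunks):
    """Min-max normalization over chunked data, mimicking a
    MapReduce job on a single machine (an illustrative assumption)."""
    lo, hi = reduce(reduce_stats, map(map_stats, chunks))
    span = (hi - lo) or 1.0  # guard against constant attributes
    return [[(v - lo) / span for v in chunk] for chunk in chunks]

if __name__ == "__main__":
    chunks = [[3.0, 7.0, 5.0], [1.0, 9.0], [4.0]]
    print(minmax_normalize(chunks))  # values rescaled to [0, 1]
```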
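As an illustration of the feature-extraction step, the sketch below computes the named statistical, higher-order and correlation-based features for a numeric attribute using NumPy and SciPy. The histogram binning and the use of plain Shannon entropy in place of the paper's "enhanced entropy" (whose definition is not given here) are assumptions.

```python
import numpy as np
from scipy import stats

def extract_features(x: np.ndarray, y: np.ndarray) -> dict:
    """Statistical, higher-order and correlation-based features
    for one numeric attribute x (correlation taken against y)."""
    hist, _ = np.histogram(x, bins=32, density=True)  # assumed binning
    return {
        # statistical features
        "mean": float(np.mean(x)),
        "median": float(np.median(x)),
        "min": float(np.min(x)),
        "max": float(np.max(x)),
        "sd": float(np.std(x, ddof=1)),
        # higher-order statistical features
        "skewness": float(stats.skew(x)),
        "kurtosis": float(stats.kurtosis(x)),
        # plain Shannon entropy as a stand-in for the paper's
        # "enhanced entropy", whose formulation is not reproduced here
        "entropy": float(stats.entropy(hist[hist > 0])),
        # correlation-based feature
        "pearson_r": float(np.corrcoef(x, y)[0, 1]),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=1_000)
    y = 0.5 * x + rng.normal(scale=0.1, size=1_000)
    print(extract_features(x, y))
```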
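The hybrid classifier could look roughly like the PyTorch sketch below, which feeds a bidirectional LSTM encoding into stacked maxout layers. The layer sizes, pool size and the way the two components are joined are assumptions, and the sketch says nothing about HBPCO: the paper trains the model with that optimizer, whereas this illustration would be trained with ordinary gradient descent.

```python
import torch
import torch.nn as nn

class Maxout(nn.Module):
    """Maxout layer: k linear pieces, element-wise max over the pieces."""
    def __init__(self, in_dim: int, out_dim: int, k: int = 3):
        super().__init__()
        self.out_dim, self.k = out_dim, k
        self.linear = nn.Linear(in_dim, out_dim * k)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.linear(x).view(-1, self.out_dim, self.k)
        return z.max(dim=2).values

class BiLSTMDeepMaxout(nn.Module):
    """Sketch of a Bi-LSTM + deep maxout hybrid classifier
    (architecture details are illustrative assumptions)."""
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.maxout = nn.Sequential(  # "deep" = stacked maxout layers
            Maxout(2 * hidden, 64),
            Maxout(64, 32),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features); classify from the last step
        out, _ = self.lstm(x)
        return self.head(self.maxout(out[:, -1, :]))

if __name__ == "__main__":
    model = BiLSTMDeepMaxout(n_features=9, n_classes=2)
    logits = model(torch.randn(8, 5, 9))  # batch of 8 feature sequences
    print(logits.shape)                   # torch.Size([8, 2])
```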