Abstract

This research introduces a novel technique for computing multilevel fusion scores that generalizes across many datasets and applications. The system comprises four components that work together: feature engineering, ensemble learning, deep neural networks (DNNs), and transfer learning. In the feature-engineering stage, the raw data are transformed, with principal component analysis (PCA) and mutual information (MI) used to maximize predictive power. The ensemble-learning stage employs AdaBoost, which iteratively trains weak learners and reweights examples according to their errors to build a strong ensemble model. DNNs with weighted input processing, ReLU activations, and dropout layers are integrated to capture subtle patterns and correlations in the data. Finally, transfer learning adapts a pre-trained model to the feature-engineered dataset through fine-tuning. In comparative experiments, the proposed technique achieved higher accuracy, precision, recall, F1 score, and AUC-ROC, along with improved training time. Efficiency measures show reductions in inference time, memory footprint, parameter count, model size, and energy consumption. Visualizations illustrate resource consumption, per-method scores, and the distribution of inference times. Overall, the framework improves multilevel fusion score computation, performs well, and adapts to many scenarios, making it well suited to large, diverse datasets.
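As a rough illustration of the four stages described above, the following Python sketch chains MI-based feature selection and PCA into AdaBoost and a small ReLU/dropout DNN, then fuses their scores. It is a minimal sketch, not the authors' implementation: the synthetic dataset, all hyperparameters, the simple score-averaging fusion rule, and the use of layer freezing on the same DNN (standing in for adapting a separately pre-trained model) are illustrative assumptions.

```python
# Minimal sketch of the four-stage pipeline; all settings are illustrative,
# not the reported configuration from the paper.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from tensorflow import keras

# Synthetic stand-in for a raw tabular dataset.
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: feature engineering -- keep the most informative features by
# mutual information, then decorrelate and compress them with PCA.
mi = SelectKBest(mutual_info_classif, k=20).fit(X_train, y_train)
pca = PCA(n_components=10).fit(mi.transform(X_train))
Z_train = pca.transform(mi.transform(X_train))
Z_test = pca.transform(mi.transform(X_test))

# Stage 2: ensemble learning -- AdaBoost reweights training examples after
# each weak learner, emphasizing previously misclassified points.
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(Z_train, y_train)

# Stage 3: a small DNN with ReLU activations and dropout regularization.
dnn = keras.Sequential([
    keras.layers.Input(shape=(Z_train.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(1, activation="sigmoid"),
])
dnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
dnn.fit(Z_train, y_train, epochs=20, batch_size=64, verbose=0)

# Stage 4: transfer-learning-style fine-tuning -- freeze the early layers
# and retrain only the top layers at a lower learning rate.
for layer in dnn.layers[:-2]:
    layer.trainable = False
dnn.compile(optimizer=keras.optimizers.Adam(1e-4),
            loss="binary_crossentropy", metrics=["accuracy"])
dnn.fit(Z_train, y_train, epochs=5, batch_size=64, verbose=0)

# Score-level fusion: average the ensemble and DNN probabilities.
p_ada = ada.predict_proba(Z_test)[:, 1]
p_dnn = dnn.predict(Z_test, verbose=0).ravel()
fused = (p_ada + p_dnn) / 2
print("fused accuracy:", ((fused > 0.5) == y_test).mean())
```

Averaging the two probability streams is only one possible fusion rule; weighted or learned combinations would slot into the same final step.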
