Abstract

Optimizing neural network architectures through effective pruning techniques has become essential to balancing model complexity and accuracy. This study introduces a novel correlation-based approach that systematically reduces network size by identifying and removing redundant neurons based on the correlations between their activations. By selectively pruning neurons while compensating for their contributions, the method maintains model fidelity across diverse datasets. Results demonstrate substantial architecture reductions with minimal performance impact: for the MNIST dataset, the hidden layers were reduced from 128-128 to 118-93 neurons while maintaining a high accuracy of 97.59%. Comparative analysis indicates that this pruning approach achieves results competitive with or superior to state-of-the-art methods while reducing computational complexity and memory requirements by up to 25%. The findings highlight the potential of correlation-driven pruning strategies to make neural networks more efficient and better suited to resource-constrained environments.
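To make the idea of correlation-driven pruning with compensation concrete, the following is a minimal sketch of one plausible realization for a single fully connected hidden layer. It is not the paper's exact procedure: the function name `correlation_prune`, the correlation threshold, and the least-squares compensation step are illustrative assumptions. The sketch records a layer's activations, finds neuron pairs whose activations are highly correlated, removes one neuron of each pair, and folds a scaled copy of its outgoing weights into the retained neuron so the layer's output is approximately preserved.

```python
import numpy as np

def correlation_prune(activations, W_out, threshold=0.95):
    """Illustrative correlation-based pruning of one hidden layer (assumed sketch).

    activations : (num_samples, num_neurons) recorded activations of the layer
    W_out       : (num_neurons, num_outputs) outgoing weight matrix of the layer
    threshold   : absolute correlation above which a neuron is treated as redundant
    Returns a boolean mask of kept neurons and the compensated outgoing weights.
    """
    corr = np.corrcoef(activations, rowvar=False)  # neuron-by-neuron correlation matrix
    keep = np.ones(corr.shape[0], dtype=bool)

    for i in range(corr.shape[0]):
        if not keep[i]:
            continue
        for j in range(i + 1, corr.shape[0]):
            if keep[j] and abs(corr[i, j]) >= threshold:
                # Neuron j is redundant with neuron i: approximate a_j ~ scale * a_i
                # (least-squares fit), then fold j's outgoing weights into i's so the
                # downstream input is roughly unchanged (the "compensation" step).
                a_i, a_j = activations[:, i], activations[:, j]
                scale = (a_i @ a_j) / (a_i @ a_i + 1e-12)
                W_out[i] += scale * W_out[j]
                keep[j] = False

    return keep, W_out[keep]

# Hypothetical usage: prune a 128-neuron layer using activations from a calibration set.
acts = np.random.randn(1000, 128)          # stand-in for recorded activations
W = np.random.randn(128, 10)               # stand-in for outgoing weights
mask, W_pruned = correlation_prune(acts, W.copy())
print(f"kept {mask.sum()} of {mask.size} neurons")
```

In this sketch the pruned layer's weight matrix shrinks from 128 rows to however many neurons survive the threshold, which is the kind of 128-to-118 or 128-to-93 reduction the abstract reports; the actual selection criterion and compensation rule used in the study may differ.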