Abstract

Deep Neural Networks (DNNs) have become a key technology in revolutionizing the healthcare sector. However, conducting online remote inference is often impractical due to privacy constraints and latency requirements. To enable local computation, researchers have explored network pruning with minimal accuracy loss and DNN distribution across devices without degrading performance. Yet, distributed inference can be inefficient because of the energy overhead and the fluctuation of communication channels between participants. On the other hand, since realistic healthcare systems rely on pre-trained models, local pruning and retraining that depend only on the scarce available data are not feasible. Even pre-pruned DNNs have limited ability to adapt to the local data load and device dynamics. Online pruning of DNNs without retraining is viable; however, it has not been considered in the literature, as most well-known techniques do not perform well without subsequent adjustment. In this paper, we propose a novel pruning strategy that uses Explainable AI (XAI) to improve the performance of pruned DNNs without retraining, a necessity given the scarcity and bias of local healthcare data. We combine distribution and pruning techniques to perform online distributed inference, assisted by dynamic pruning when needed to maintain the highest accuracy. We use Non-Linear Integer Programming (NLP) to formulate our approach as a trade-off between resources and accuracy, and Reinforcement Learning (RL) to relax the problem and adapt to dynamic requirements. Our pruning criterion achieves high performance compared to other reference techniques and assists distribution by reducing resource usage while maintaining high accuracy.
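
To make the idea of relevance-guided pruning without retraining concrete, below is a minimal sketch, not the paper's actual criterion: it assumes a first-order Taylor-style relevance proxy (mean |activation × gradient| over a small calibration batch) as a stand-in for the XAI criterion, and applies a hard zero-mask to the least-relevant output channels of a convolution layer. `TinyNet`, `channel_relevance`, and `prune_channels` are hypothetical names introduced only for illustration.

```python
# Sketch: relevance-guided channel pruning without retraining (illustrative only).
import torch
import torch.nn as nn

class TinyNet(nn.Module):  # hypothetical stand-in for a pre-trained model
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, 3, padding=1)
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        feats = torch.relu(self.conv(x))   # (B, 16, H, W)
        pooled = feats.mean(dim=(2, 3))    # global average pooling
        return self.head(pooled), feats

def channel_relevance(model, x, y):
    """Per-channel relevance: mean |activation * gradient| over a calibration batch."""
    logits, feats = model(x)
    feats.retain_grad()
    nn.functional.cross_entropy(logits, y).backward()
    return (feats * feats.grad).abs().mean(dim=(0, 2, 3))  # one score per channel

def prune_channels(conv, scores, keep_ratio=0.5):
    """Zero out the weights of the least-relevant output channels (no retraining)."""
    keep = torch.topk(scores, int(conv.out_channels * keep_ratio)).indices
    mask = torch.zeros(conv.out_channels, dtype=torch.bool)
    mask[keep] = True
    with torch.no_grad():
        conv.weight[~mask] = 0.0
        if conv.bias is not None:
            conv.bias[~mask] = 0.0

model = TinyNet()
x = torch.randn(8, 1, 28, 28)          # small calibration batch (scarce local data)
y = torch.randint(0, 2, (8,))
prune_channels(model.conv, channel_relevance(model, x, y), keep_ratio=0.5)
```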
