Abstract
Recently, studies on early-exit mechanisms have emerged to reduce the computational cost of inference in deep learning models. However, most existing early-exit architectures decide whether to exit early based only on a target confidence level of the prediction, without considering the computational cost. Such an early-exit criterion fails to balance accuracy and cost, making these architectures difficult to use across different environments. To address this problem, we propose a novel, cost-effective early-exit architecture in which the early-exit criterion is designed based on the Markov decision process (MDP). Since the early-exit decisions within an early-exit model are sequential, we model them as an MDP problem that maximizes accuracy while minimizing the computational cost. We then develop a cost-effective early-exit algorithm that solves this MDP problem using reinforcement learning. For each input sample, the algorithm dynamically makes early-exit decisions according to the relative importance of accuracy and computational cost in a given environment, thereby balancing the trade-off between accuracy and cost regardless of the environment. Consequently, it can be used in various environments, including resource-constrained ones. Through extensive experiments, we demonstrate that our proposed architecture effectively balances this trade-off across different environments, whereas existing architectures fail to do so because they focus only on reducing cost while preventing accuracy degradation.
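To make the formulation concrete, the following is a minimal sketch of how sequential early-exit decisions can be framed as an MDP with a reinforcement-learned policy. The specifics here are assumptions for illustration, not the authors' implementation: the state is taken to be the internal classifier's softmax output plus the compute spent so far, the policy chooses EXIT or CONTINUE at each exit branch, and the episode reward trades off correctness against accumulated cost through a hypothetical weight `lam` that encodes the relative importance of accuracy and cost in a given environment.

```python
# Hypothetical sketch of early-exit decisions as an MDP (not the paper's code).
# State: softmax of the current internal classifier + normalized cost so far.
# Action: EXIT (stop and predict now) or CONTINUE (run the next block).
# Reward: correctness minus lam * total compute cost at the end of the episode.
import torch
import torch.nn as nn

EXIT, CONTINUE = 0, 1


class ExitPolicy(nn.Module):
    """Maps the state at an exit branch to a distribution over {EXIT, CONTINUE}."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes + 1, 32), nn.ReLU(), nn.Linear(32, 2)
        )

    def forward(self, probs: torch.Tensor, cost_so_far: torch.Tensor):
        # Concatenate prediction confidence and accumulated cost into the state.
        state = torch.cat([probs, cost_so_far.unsqueeze(-1)], dim=-1)
        return torch.distributions.Categorical(logits=self.net(state))


def episode_reward(correct: bool, total_cost: float, lam: float) -> float:
    # Accuracy term minus cost term; lam sets their relative importance.
    return (1.0 if correct else 0.0) - lam * total_cost


if __name__ == "__main__":
    policy = ExitPolicy(num_classes=10)
    probs = torch.softmax(torch.randn(1, 10), dim=-1)  # internal classifier output
    cost = torch.tensor([0.3])                         # fraction of compute already spent
    action = policy(probs, cost).sample()
    print("EXIT" if action.item() == EXIT else "CONTINUE")
```

Under this framing, the policy would be trained with a standard policy-gradient method (e.g. REINFORCE) on the episode reward, so that a larger `lam` pushes the policy toward earlier exits in cost-sensitive environments, while a smaller `lam` favors running more blocks for higher accuracy.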