Abstract

Collaborative inference in mobile edge computing (MEC) enables mobile devices to offload computation tasks for computation-intensive perception services, and the inference policy determines the inference latency and energy consumption. The optimal inference policy depends on the deep learning inference performance model, the data generation model, and the network model, which are rarely known to mobile devices in time. In this paper, we propose a multi-agent reinforcement learning (RL) based energy-efficient MEC collaborative inference scheme, which enables each mobile device to choose both the deep learning partition point and the collaborative edge node based on the image quantity, the channel conditions, and the previous inference performance. A learning experience exchange mechanism exploits the Q-values of neighboring mobile devices to accelerate inference policy optimization with less energy consumption. We also provide a deep multi-agent RL based inference scheme to accelerate learning in large-scale MEC networks, in which an actor network yields the probability distribution of the collaborative inference policy and a critic network guides the weight updates of the actor network to enhance sample efficiency. We derive the inference performance bound and analyze the computational complexity. Both simulation and experimental results show that our proposed schemes reduce the inference latency and save MEC energy consumption.
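
To illustrate the kind of per-device agent the abstract describes, the sketch below shows a tabular Q-learning agent that chooses a (partition point, edge node) action from a discretized state of image quantity, channel condition, and previous inference performance, and blends in neighbors' Q-values as a learning experience exchange. All class names, the state discretization, the reward convention, and the averaging-based exchange rule are illustrative assumptions, not the paper's actual formulation; the deep multi-agent variant in the paper would replace the Q-table with actor and critic networks.

```python
# Minimal sketch of a per-device tabular Q-learning agent (assumed design,
# not the paper's exact algorithm).
import random
from collections import defaultdict

class DeviceAgent:
    def __init__(self, num_partition_points, num_edges,
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        # Action = (deep learning partition point, collaborative edge node).
        self.actions = [(p, e) for p in range(num_partition_points)
                        for e in range(num_edges)]
        self.q = defaultdict(float)   # Q[(state, action)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def observe(self, image_quantity, channel_gain, prev_latency):
        # Discretize (image quantity, channel condition, previous inference
        # performance) into a compact state; the binning here is hypothetical.
        return (image_quantity // 5, round(channel_gain, 1), round(prev_latency, 1))

    def act(self, state):
        # Epsilon-greedy choice of partition point and collaborative edge.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update; the reward would trade off inference
        # latency against energy consumption.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

    def exchange(self, neighbor_agents, weight=0.5):
        # Learning experience exchange: blend in neighbors' Q-values to
        # accelerate policy optimization (an assumed averaging rule).
        for other in neighbor_agents:
            for key, value in other.q.items():
                self.q[key] = (1 - weight) * self.q[key] + weight * value
```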
