Abstract

The Transformer is a popular machine learning model used by many intelligent applications in smart cities. However, its high computational complexity makes it hard to deploy on weak edge devices. This paper presents a novel two-round offloading scheme, called A-MOT, for efficient Transformer inference. A-MOT samples only a small portion of the image data and sends it to edge servers, incurring negligible computational overhead at the edge devices. The server recovers the image with a masked autoencoder (MAE) before inference. In addition, an SLO-adaptive module is designed to achieve personalized transmission and effective bandwidth utilization. To avoid the large overhead of repeated inference in the second round, A-MOT further contains a lightweight inference module that saves inference time in that round. Extensive experiments have been conducted to verify the effectiveness of A-MOT.
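The core bandwidth saving comes from transmitting only a random subset of image patches, in the style of MAE masking, and letting the server reconstruct the rest. The sketch below illustrates that edge-side sampling step; the function and parameter names (`sample_patches`, `keep_ratio`) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def sample_patches(image, patch=16, keep_ratio=0.25, seed=0):
    """Split an image into non-overlapping patches and keep a random subset.

    A minimal sketch of MAE-style masking at the edge device: only the
    kept patches and their indices are sent to the server, which
    reconstructs the full image before running inference.
    """
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    # Rearrange (H, W, C) into (num_patches, patch*patch*C).
    patches = (image.reshape(h // patch, patch, w // patch, patch, c)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(-1, patch * patch * c))
    rng = np.random.default_rng(seed)
    n_keep = int(len(patches) * keep_ratio)
    idx = rng.choice(len(patches), size=n_keep, replace=False)
    return patches[idx], idx  # payload transmitted to the edge server

img = np.zeros((224, 224, 3), dtype=np.uint8)
kept, idx = sample_patches(img)
print(kept.shape)  # (49, 768): 25% of the 196 patches of a 224x224 image
```

With a 25% keep ratio, the transmitted payload is roughly a quarter of the raw pixel data, which is the kind of reduction that makes offloading feasible over constrained links.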
