Abstract

Two major techniques are commonly used to meet real-time inference limitations when distributing models across resource-constrained IoT devices: (1) model parallelism (MP) and (2) class parallelism (CP). In MP, transmitting bulky intermediate data (orders of magnitude larger than the input) between devices imposes huge communication overhead. Although CP solves this problem, it is limited in the number of sub-models it can produce. In addition, both solutions are fault intolerant, an issue when deployed on edge devices. We propose variant parallelism (VP), an ensemble-based deep learning distribution method in which different variants of a main model are generated and can be deployed on separate machines. We design a family of lighter models around the original model and train them simultaneously to improve accuracy over single models. Our experimental results on six common mid-sized object recognition datasets demonstrate that our models can have 5.8–7.1× fewer parameters, 4.3–31× fewer multiply-accumulations (MACs), and 2.5–13.2× less response time on atomic inputs compared to MobileNetV2 while achieving comparable or higher accuracy. Our technique easily generates several variants of the base architecture. Each variant returns only 2k outputs (1 ≤ k ≤ #classes/2), representing the Top-k classes, instead of the bulky floating-point tensors required in MP. Since each variant provides a full-class prediction, our approach maintains higher availability than MP and CP in the presence of failure.
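To make the distribution idea concrete, below is a minimal Python sketch (not from the paper) of how a coordinator might merge the Top-k outputs of whichever variants respond. The function name, the per-class score-summing rule, and the (class_id, score) message format are illustrative assumptions; the paper's exact combination scheme may differ.

    # Minimal sketch of variant-parallelism aggregation, assuming each
    # variant reports its Top-k classes as (class_id, score) pairs.
    # Summing scores per class is an illustrative choice, not necessarily
    # the paper's exact combination rule.
    from collections import defaultdict

    def aggregate(variant_outputs, num_classes):
        """Merge Top-k outputs from the variants that responded.

        variant_outputs: one list of (class_id, score) pairs per variant;
        failed or late variants are simply absent, so the ensemble still
        returns a prediction (graceful degradation under failure).
        """
        scores = defaultdict(float)
        for top_k in variant_outputs:
            for class_id, score in top_k:
                scores[class_id] += score
        # Classes never reported by any variant keep a score of 0.
        return max(range(num_classes), key=lambda c: scores[c])

    # Example: each variant sends only 2k values (k pairs) instead of a
    # bulky intermediate tensor; a hypothetical third variant has failed
    # and sends nothing, yet a full-class prediction is still produced.
    outputs = [
        [(7, 0.62), (2, 0.21)],   # variant 1, k = 2
        [(7, 0.55), (4, 0.30)],   # variant 2, k = 2
    ]
    print(aggregate(outputs, num_classes=10))  # -> 7

Because every variant covers all classes on its own, dropping a variant only shrinks the ensemble rather than removing part of the model, which is the availability advantage claimed over MP and CP.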
