Abstract

Advances in edge computing have enabled image recognition via smart cameras (hereafter referred to as edge cameras), facilitating the development of unmanned stores. However, collecting labeled data to train an initial model for an edge camera is time-consuming and costly. Although existing transfer learning can speed up model training, it depends on a powerful centralized server and considerable human intervention, hindering the development of autonomous and collaborative edge learning. To address this issue, our study proposes direct edge-to-edge (e2e) collaborative transfer learning with three key technologies. The first is elite-instance-based matching, which selects and transmits only representative images for transfer learning, decreasing network cost among edge cameras. The second is one-to-many e2e transfer learning, which increases the knowledge reusability of a single source camera for building multiple target models. The last is many-to-one e2e transfer learning, which enables a target edge camera to reuse knowledge from multiple sources, further decreasing the effort of labeled data collection. The experimental results show that elite-instance-based matching reduces the number of source samples that must be transmitted for initial model training by up to 70% on average, and it further improves the accuracy of existing one-to-many and many-to-one e2e transfer learning.
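The elite-instance idea, transmitting only representative source images instead of the full dataset, can be illustrated with a minimal sketch: cluster source feature vectors and keep only the sample nearest each cluster center. The function name, the simple k-means routine, and all parameters below are illustrative assumptions, not the paper's actual matching algorithm.

```python
import numpy as np

def select_elite_instances(features, k, n_iter=20, seed=0):
    """Pick up to k representative ("elite") samples: run a basic
    k-means over the feature vectors, then keep the index of the
    sample closest to each centroid. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n = len(features)
    # initialize centroids from k distinct random samples (copy via fancy indexing)
    centroids = features[rng.choice(n, size=k, replace=False)]
    for _ in range(n_iter):
        # assign every sample to its nearest centroid
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its members
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # elite instance per cluster = the real sample nearest that centroid
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return np.unique(dists.argmin(axis=0))  # indices into `features`

# usage: from 100 source samples, transmit at most 30 elites (~70% savings)
X = np.random.default_rng(1).normal(size=(100, 8))
idx = select_elite_instances(X, k=30)
```

In this sketch, the "up to 70% savings" corresponds simply to choosing k at roughly 30% of the source set size; how k (or the elite set) is actually chosen is determined by the paper's matching method.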
