Abstract

Emerging hardware technologies enable large numbers of multitype sensors with diverse sensing capabilities to collect data and train AI models. Each sensor collects partial data samples on a type-specific feature space, resulting in hybrid data partitioning across the local datasets and making it challenging to develop a communication-efficient and scalable training algorithm. We propose a hierarchical federated learning framework for such hybrid data partitioning with a multitier-partitioned neural network architecture. Specifically, we adopt a primal-dual transform to decompose the training problem over both the sample and feature spaces. A stochastic coordinate gradient descent-ascent algorithm then updates the primal and dual variables via intratype and intertype over-the-air aggregation, respectively. Over-the-air aggregation of the transmitted signals naturally harnesses channel perturbations and interference, reducing communication complexity while preserving privacy. Despite transmission noise and channel distortion, we provide a convergence analysis for general objective functions, which establishes the robust training performance of the proposed algorithm with a theoretical guarantee.
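The key primitive above is over-the-air aggregation: when all devices transmit their updates simultaneously over a shared multiple-access channel, the channel itself superimposes the signals, so the receiver obtains a noisy sum in one transmission slot rather than one message per device. A minimal sketch of this idea, assuming an idealized real-valued channel with additive Gaussian noise (the function name `ota_aggregate` and the noise model are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def ota_aggregate(local_updates, noise_std=0.01):
    """Noisy average of device updates via a superposition channel.

    The multiple-access channel adds all simultaneously transmitted
    signals 'for free'; the receiver only observes their sum plus
    additive channel noise, never the individual updates.
    """
    superposed = np.sum(local_updates, axis=0)           # channel-side summation
    noise = noise_std * rng.normal(size=superposed.shape)
    return (superposed + noise) / len(local_updates)     # noisy mean of the updates

# Toy usage: three sensors each hold a local gradient estimate.
grads = [np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([1.5, 3.0])]
avg = ota_aggregate(grads, noise_std=0.0)  # a noiseless channel recovers the exact mean
```

Because the receiver sees only the superposed signal, individual device updates are never exposed, which is the privacy benefit noted above; the residual noise is what the convergence analysis must account for.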
