Abstract

Today, the availability of data for machine learning model training is constrained by data sovereignty restrictions. Transferring data from field or remote locations to core cloud regions for processing and training introduces further issues of scaling, cost, and latency. This paper focuses on the role of GPU shapes at the edge of the cloud for federated machine learning (ML) training. Federated ML training is recommended to address customer requirements around data sovereignty and other restrictions on streaming or real-time data transfer back to a central cloud region, and to mitigate the increased latency, associated link costs, and lack of data diversity that come with relying solely on centralized cloud regions. This paper proposes a solution that implements federated machine learning at the cloud's edge points of presence using GPU-based computing nodes. Depending on the industry segment or regional requirements, inference can also take place at the edge. The resulting model updates from training, or the inference outputs, are aggregated at the core regions. We also discuss some new developments targeted at this space.
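To make the aggregation step concrete, the sketch below shows one federated-averaging (FedAvg) round in plain Python: each edge point of presence trains on its own local data, and only the learned model parameters, never the raw data, are sent to the core region for weighted averaging. The linear model, function names, and data sizes are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical illustration of federated averaging (FedAvg): each edge site
# trains a small linear model on local data, and only the learned weights
# (never the raw data) are sent to the core region for aggregation.

def local_train(weights, X, y, lr=0.01, epochs=50):
    """Run a few epochs of gradient descent at a single edge site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def core_aggregate(site_weights, site_sizes):
    """Core-region step: average site weights, weighted by local data size."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)

# Simulated private datasets held at three edge points of presence.
edge_data = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
    edge_data.append((X, y))

for round_idx in range(10):
    # Each edge site trains locally; only weight vectors travel to the core.
    updates = [local_train(global_w, X, y) for X, y in edge_data]
    sizes = [len(y) for _, y in edge_data]
    global_w = core_aggregate(updates, sizes)

print("aggregated global weights:", global_w)
```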
