Abstract
Recent research has integrated machine learning with software-defined networking to support intelligent traffic engineering. However, most frameworks enable machine learning only in remote controllers, which introduces significant signaling overhead and data forwarding costs. In this work, we present a new architecture, called in-network inference (INI), that realizes local learning on the Neural Compute Stick (NCS), a portable device that can be connected to a programmable switch via a USB port. While the NCS can flexibly extend the computing power of a switch, its limited capacity cannot sustain real-time inference under heavy traffic demands. To develop a practical local learning architecture, we design a two-phase learning framework that combines local learning with knowledge distillation and remote learning to achieve lightweight yet accurate traffic classification. We further design an inference model deployment and adaptation algorithm that allows multiple NCS devices attached to different switches to share the inference workload of a network. Our testbed experiments show that the two-phase learning framework reduces the inference rejection rate by 46.5% while maintaining an inference accuracy of 98.10%. Trace-driven simulations verify that the proposed adaptive model placement scheme accounts for load balancing and thus better utilizes the computing resources of the NCS devices to serve dynamic inference requests.
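The abstract does not specify the distillation procedure, so the following is only a minimal sketch of standard teacher-student knowledge distillation (in the style of Hinton et al.), not the paper's exact framework. The network sizes, feature dimension, class count, temperature, and loss weighting below are all illustrative assumptions; in the paper's setting, the large teacher would correspond to the remote model and the small student to the NCS-resident classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative dimensions only; not taken from the paper.
NUM_FEATURES, NUM_CLASSES = 32, 8

# Hypothetical large "remote" teacher and small "NCS-sized" student.
teacher = nn.Sequential(
    nn.Linear(NUM_FEATURES, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, NUM_CLASSES),
)
student = nn.Sequential(
    nn.Linear(NUM_FEATURES, 32), nn.ReLU(),
    nn.Linear(32, NUM_CLASSES),
)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Blend the soft-label KD term with the usual hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# One illustrative training step on random stand-in traffic features.
x = torch.randn(64, NUM_FEATURES)
y = torch.randint(0, NUM_CLASSES, (64,))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

with torch.no_grad():
    t_logits = teacher(x)  # teacher inference runs offline / remotely
s_logits = student(x)
loss = distillation_loss(s_logits, t_logits, y)
loss.backward()
optimizer.step()
```

Under this kind of scheme, only the compact student ever needs to fit on the resource-limited NCS, while the accuracy of the larger remote model is transferred through the softened teacher outputs.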