Abstract
This January 2024 issue contains two technical papers. The first technical paper, Planter: Rapid Prototyping of In-Network Machine Learning Inference, by Changgang Zheng and colleagues, proposes a framework that streamlines the deployment of machine learning models across a wide range of programmable hardware devices, such as the Intel Tofino, Xilinx/AMD Alveo, and NVIDIA BlueField-2. The authors discuss the challenges of deploying machine learning algorithms onto different programmable devices.