Abstract
The twenty-five papers in this special issue focus on industrial applications that deploy compact deep neural networks. Artificial neural networks have been adopted for a broad range of tasks in areas such as multimedia analysis and processing, media coding, and data analytics. Their recent success rests on the feasibility of processing much larger and more complex neural networks (deep neural networks, DNNs) than in the past, and on the availability of large-scale training data sets. As a consequence, the large memory footprint of trained neural networks and the high computational complexity of inference can no longer be neglected. Many applications require the deployment of a particular trained network instance, potentially to a large number of devices that may be limited in processing power and memory. For such applications, compact representations of neural networks are of increasing relevance. Despite active research in the area, open questions remain: for example, how to train neural networks that achieve compact representations without sacrificing performance, and how to obtain representations that allow not only compact transmission but also efficient inference. This issue sheds light on these important questions and provides answers that may be useful for both practitioners and researchers in the field.
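To make the memory-footprint argument concrete, the sketch below illustrates one common compaction technique, post-training affine 8-bit quantization, which is an example of the kind of method the issue's papers address rather than any specific contribution from them. Storing int8 codes plus a scale and zero point in place of float32 weights cuts the weight storage roughly fourfold, at the cost of a bounded reconstruction error.

```python
import numpy as np

def quantize_int8(weights):
    """Affine (asymmetric) 8-bit quantization of a float32 weight array.

    Maps the observed weight range [min, max] onto the 256 code values
    0..255 and returns the codes plus the (scale, zero_point) pair
    needed to dequantize them.
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against a constant array
    zero_point = round(-w_min / scale)
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Reconstruct approximate float32 weights from the int8 codes."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # stand-in for a weight tensor
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
print(q.nbytes, w.nbytes)  # 1000 vs 4000 bytes: ~4x smaller
print(float(np.abs(w - w_hat).max()))  # per-weight error bounded by the scale
```

This only addresses compact storage and transmission; as the abstract notes, achieving efficient inference from such representations, and training networks so that quantization costs little accuracy, are among the open questions the issue examines.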
IEEE Journal of Selected Topics in Signal Processing