Abstract

The twelve papers in this special section focus on machine learning architectures and accelerators. Deep learning, based on deep neural networks (DNNs), is one of the most powerful machine learning techniques and has achieved extraordinary performance in computer vision and surveillance, speech recognition and natural language processing, healthcare and disease diagnosis, and other domains. Various forms of DNNs have been proposed, including convolutional neural networks, recurrent neural networks, deep reinforcement learning, and Transformer models. Deep learning comprises an offline training phase, which derives the weight parameters from an extensive training dataset, and an online inference phase, which performs classification, prediction, perception, or control tasks based on the trained model. The papers in this section aim to find a convergence of software and hardware/architecture. They target DNN algorithms, parallel computing, and compiler code-generation techniques that are hardware/architecture friendly, as well as computer architectures that are general purpose and consistently deliver high performance across a wide range of DNN algorithms and applications. Such a co-design and co-optimization framework mitigates the limitations of investigating only a single direction, shedding some light on the future of embedded, ubiquitous artificial intelligence.