Abstract

The increase in computational power has enabled complex problems to be solved with techniques from the field of Artificial Intelligence (AI) based on Deep Neural Networks (DNNs) and Deep Learning (DL). A recent trend is to apply these techniques, which have proven to deliver excellent results, to Edge computing. However, Edge computing relies on simple low-power devices that are severely restricted in computational power and, especially, in available memory. Packing the neural network parameters efficiently into the available memory is therefore essential. Memory systems normally expect transactions to be aligned to the bus width for maximum performance. This can lead to inefficient memory utilization, because groups of parameters that must be read in parallel have to be stored aligned in memory. In this paper, I present a memory controller that provides unaligned memory transfers at full bus width. With this controller, memory efficiency can be increased by 25% while preserving the memory access time.
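
To make the alignment problem concrete, the following C sketch models one common way such a controller can serve an unaligned, full-bus-width read: fetch the two aligned bus beats that cover the requested range and splice them, roughly what a barrel shifter does in hardware. The bus width (8 bytes), the parameter-group size (6 bytes), and the splicing scheme are illustrative assumptions for this example and are not taken from the paper; the toy arithmetic at the end only shows how padding groups to the bus width can waste storage.

/*
 * Illustrative sketch only: a software model of an unaligned bus-width
 * read assembled from two aligned beats, plus a toy estimate of the
 * storage wasted by alignment padding. All sizes are assumptions made
 * for this example, not values from the paper.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BUS_BYTES 8u   /* assumed bus width: 64 bits */

/* Backing memory, modeled as a flat byte array of aligned bus words. */
static uint8_t mem[256];

/* Read one aligned bus beat, i.e. BUS_BYTES bytes at word index `beat`. */
static void read_beat(unsigned beat, uint8_t out[BUS_BYTES]) {
    memcpy(out, &mem[beat * BUS_BYTES], BUS_BYTES);
}

/*
 * Unaligned read of BUS_BYTES bytes starting at an arbitrary byte
 * address `addr`: fetch the two aligned beats that cover the range
 * and splice the relevant parts together.
 */
static void read_unaligned(unsigned addr, uint8_t out[BUS_BYTES]) {
    unsigned beat = addr / BUS_BYTES;
    unsigned off  = addr % BUS_BYTES;
    uint8_t lo[BUS_BYTES], hi[BUS_BYTES];

    read_beat(beat, lo);
    read_beat(beat + 1, hi);

    memcpy(out, lo + off, BUS_BYTES - off);    /* tail of the first beat  */
    memcpy(out + (BUS_BYTES - off), hi, off);  /* head of the second beat */
}

int main(void) {
    /* Fill memory with a recognizable pattern. */
    for (unsigned i = 0; i < sizeof mem; i++)
        mem[i] = (uint8_t)i;

    /* Read 8 bytes starting at byte 13, which crosses a bus boundary. */
    uint8_t word[BUS_BYTES];
    read_unaligned(13, word);
    printf("unaligned read @13:");
    for (unsigned i = 0; i < BUS_BYTES; i++)
        printf(" %02x", word[i]);
    printf("\n");

    /*
     * Toy efficiency estimate (hypothetical sizes): 6-byte parameter
     * groups padded to an 8-byte bus waste 2 bytes out of every 8,
     * whereas packing the groups back to back removes that padding.
     */
    unsigned group = 6, groups = 1000;
    unsigned aligned_bytes = groups * BUS_BYTES;
    unsigned packed_bytes  = groups * group;
    printf("aligned: %u bytes, packed: %u bytes (%.0f%% saved)\n",
           aligned_bytes, packed_bytes,
           100.0 * (aligned_bytes - packed_bytes) / aligned_bytes);
    return 0;
}

In this sketch the splice always issues exactly two aligned beats per unaligned word, which is why access time can stay constant; how the actual controller schedules those beats is described in the paper itself.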
