Abstract

As computation in machine-learning applications grows along with the size of datasets, the energy and performance costs of data movement come to dominate those of compute. This issue is more pronounced in embedded systems with limited resources and energy. Although near-data processing (NDP) has been pursued as an architectural solution, comparatively little attention has been paid to scaling NDP for larger embedded machine-learning applications (e.g., speech and motion processing). We propose machine-learning hardware acceleration using a software-defined intelligent memory system (Mahasim). Mahasim is a scalable NDP-based memory system in which application performance scales with the size of the data. Its building blocks are programmable memory slices, supported by data partitioning, compute-aware memory allocation, and an independent in-memory execution model. For recurrent neural networks, Mahasim achieves up to 537.95 GFLOPS/W energy efficiency and a 3.9x speedup as the system grows from 2 to 256 memory slices, indicating that Mahasim favors larger problems.
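The abstract names three mechanisms: data partitioning, compute-aware allocation, and independent per-slice execution. The following is a minimal illustrative sketch, not Mahasim's actual API, of how work might be sharded across programmable memory slices so that each slice computes independently on its own shard; all names (`MemorySlice`, `partition`) are hypothetical.

```python
# Hypothetical sketch of NDP-style data partitioning across memory slices.
# Names and structure are illustrative assumptions, not the paper's design.

class MemorySlice:
    """A programmable memory slice holding one shard of the data."""
    def __init__(self, shard):
        self.shard = shard

    def execute(self, fn):
        # Each slice runs its computation independently, near its data,
        # avoiding movement of the shard to a central processor.
        return [fn(x) for x in self.shard]

def partition(data, n_slices):
    """Split data round-robin into n_slices shards. A compute-aware
    allocator would additionally weight shard sizes by slice capability."""
    return [MemorySlice(data[i::n_slices]) for i in range(n_slices)]

# Doubling the slice count halves each slice's share of the work,
# the scaling behavior the abstract attributes to larger systems.
data = list(range(16))
slices = partition(data, 4)
partials = [s.execute(lambda x: x * x) for s in slices]
result = sorted(sum(partials, []))
```

Because each shard is processed in place, adding slices reduces per-slice work without adding data movement, which is why such designs tend to favor larger problem sizes.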
