Abstract
Memory-augmented neural networks (MANNs) were introduced to handle data with long-term dependencies efficiently. MANNs have shown promising results in question answering (QA) tasks, which require holding a context in memory to answer a given question. As demand for QA on edge devices has increased, using MANNs in resource-constrained environments has become important. To achieve fast and energy-efficient inference of MANNs, we can exploit application-specific hardware accelerators on field-programmable gate arrays (FPGAs). Although several accelerators for conventional deep neural networks have been designed, they are difficult to use efficiently with MANNs because of the different computational requirements. In addition, the characteristics of QA tasks should be considered to further improve the efficiency of inference on such accelerators. To address these issues, we propose an inference accelerator for MANNs on an FPGA. To fully utilize the proposed accelerator, we introduce fast inference methods that exploit the features of QA tasks. To evaluate the proposed approach, we implemented the architecture on an FPGA and measured the execution time and energy consumption on the bAbI data set. In our experiments, the proposed methods improved the speed and energy efficiency of MANN inference by up to about 25.6 and 28.4 times, respectively, compared with a CPU.
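For readers unfamiliar with MANN inference, the following is a minimal sketch, not the paper's implementation, of the attention-based memory read that dominates QA inference in models such as end-to-end memory networks: context sentences are stored as embeddings, scored against the question embedding, and combined by a softmax-weighted sum. All names and dimensions here are illustrative.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def memory_read(memory, query):
    # One attention-based read from external memory:
    # score each stored sentence embedding against the
    # question embedding, normalize with softmax, and
    # return the attention-weighted context vector.
    scores = memory @ query    # (num_slots,)
    weights = softmax(scores)  # attention over memory slots
    return weights @ memory    # (embed_dim,)

# Toy example: 5 stored context sentences, 8-dim embeddings
# (hypothetical values, for illustration only).
rng = np.random.default_rng(0)
memory = rng.standard_normal((5, 8))  # sentence embeddings
query = rng.standard_normal(8)        # question embedding
context = memory_read(memory, query)
print(context.shape)  # (8,)
```

This read step is memory-bandwidth-bound rather than compute-bound, which is one reason accelerators designed for conventional deep neural networks do not map efficiently onto MANN workloads.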