Abstract
Recent work on training methods for reduced precision Deep Convolutional Networks shows that these networks can achieve accuracy similar to full precision networks on classification tasks. Reduced precision networks decrease the demands on the memory and computational capabilities of the computing platform. This paper investigates the impact of reduced precision on deep Recurrent Neural Networks (RNNs) trained on a regression task, in this case, monaural source separation. The effect of reduced precision is explored for two popular recurrent architectures: vanilla RNNs and RNNs using Long Short-Term Memory (LSTM) units. The results show that the performance of the networks, as measured by blind source separation metrics and speech intelligibility tests on two datasets, decreases very little even when weight precision is reduced to 4 bits.
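To make the idea of 4-bit weight precision concrete, below is a minimal sketch of uniform symmetric weight quantization. The abstract does not specify the paper's quantization scheme, so the function name, rounding rule, and range handling here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def quantize_weights(w, num_bits=4):
    """Uniformly quantize a weight array to num_bits of precision.

    Generic illustration only: the paper's exact scheme (rounding
    rule, range handling, per-layer vs. global scaling) is not
    given in the abstract.
    """
    # Symmetric range based on the largest weight magnitude.
    scale = np.max(np.abs(w))
    if scale == 0:
        return w
    # Number of positive integer levels, e.g., 7 for 4 bits.
    levels = 2 ** (num_bits - 1) - 1
    # Map to integer grid, round, then map back to float.
    q = np.round(w / scale * levels)
    return q / levels * scale

# Example: quantize a random recurrent weight matrix to 4 bits.
w = np.random.randn(128, 128).astype(np.float32)
w_q = quantize_weights(w, num_bits=4)
```

In practice, such quantization is often applied to a copy of the weights on the forward pass while full precision weights are kept for gradient updates; whether the paper follows that training strategy cannot be determined from the abstract alone.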