Abstract
In this paper, we introduce TadaStride, a new downsampling method for audio data that adaptively adjusts the downsampling ratio across an audio instance. Unlike previous methods that use a fixed downsampling ratio, TadaStride preserves more information from task-relevant parts of a data instance by applying smaller strides to those parts and larger strides to less relevant parts. We also introduce TadaStride-F, a more efficient variant of TadaStride that incurs minimal performance loss. In our experiments, we evaluate TadaStride on a range of audio processing tasks. First, in audio classification, TadaStride and TadaStride-F outperform other widely used standard downsampling methods while using comparable memory and time. Through various analyses, we then provide an understanding of how TadaStride learns effective adaptive strides and how this leads to improved performance. Finally, through additional experiments on automatic speech recognition and discrete speech representation learning, we show that TadaStride and TadaStride-F consistently outperform other downsampling methods, and we examine how the adaptive strides are learned in these tasks.
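The core idea described above, spending a fixed output budget non-uniformly so that task-relevant regions are sampled with smaller effective strides, can be sketched with importance-weighted index selection. This is an illustrative sketch only, not the paper's TadaStride module: the function name `adaptive_downsample` and the hand-crafted importance weights are assumptions for demonstration, and the actual method learns its strides end-to-end.

```python
import numpy as np

def adaptive_downsample(x, importance, out_len):
    """Select out_len frames from x, sampling densely where importance is high.

    Normalizing the importance weights into a CDF and placing output
    positions uniformly in CDF space yields small strides in
    high-importance regions and large strides elsewhere.
    """
    w = importance / importance.sum()
    cdf = np.cumsum(w)
    # Uniform targets in CDF space map back to non-uniform input positions.
    targets = (np.arange(out_len) + 0.5) / out_len
    idx = np.searchsorted(cdf, targets)
    return x[np.clip(idx, 0, len(x) - 1)]

# Toy example: 100 frames; frames 40-59 are marked as task-relevant.
x = np.arange(100, dtype=float)
importance = np.ones(100)
importance[40:60] = 10.0
y = adaptive_downsample(x, importance, out_len=20)
```

With these toy weights, most of the 20 output frames land in the relevant region 40-59, whereas a fixed stride of 5 would place only 4 frames there.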