Abstract
In this paper, we apply a genetic algorithm (GA) in a wrapper approach for feature selection on wavelet scattering (WS) second-order coefficients to reduce their large frequency dimension (>500). The evaluation demonstrates that the GA can reduce the dimension by approximately 30% with minimal performance drop. The reduced WS representation also directly lowers the training time of the convolutional neural network, cutting computation time by 20% to 32%. The paper further extends its scope to GA-based feature selection on multiple timescales of WS: 46 ms, 92 ms, 185 ms, and 371 ms. Incorporating multiple timescales improves classification performance (by around 2.5%), since an acoustic representation usually carries information at different time scales, but it increases computational cost because the combined frequency dimension grows to 1851. Applying GA for feature selection reduces this frequency dimension by 50%, saving around 40% of the computation time while improving classification performance by 3% compared to vanilla WS. All implementations are evaluated on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 dataset, where the proposed multiple-timescale model achieves a classification accuracy of 73.32%.
Highlights
Wavelet Scattering (WS), also known as Deep Scattering Spectrum (DSS), is a signal processing technique developed by Stéphane Mallat [1-4] to recover the information lost in the log-mel spectrogram, especially when the timescale is larger than 25 ms.
We evaluated the effectiveness of a genetic algorithm (GA) for feature selection on WS.
Following the framework presented in [21], we proposed our implementation of GA for WS, exploiting its meta-heuristic nature to search for a globally optimal feature subset; a sketch of this wrapper approach is given below.
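The following is a minimal sketch of wrapper-style GA feature selection over WS frequency bins, not the authors' implementation: a binary chromosome marks which bins are kept, and the fitness of a mask is the validation accuracy of a classifier trained on the selected bins. The lightweight scikit-learn classifier stands in for the paper's CNN, and the population size, generation count, mutation rate, and the `ga_select` / `fitness` helper names are illustrative assumptions.

```python
# Wrapper-style GA feature selection sketch (illustrative settings only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated accuracy on the selected bins, with a small sparsity penalty.
    A cheap classifier is used here as a stand-in for the paper's CNN."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.001 * mask.mean()

def ga_select(X, y, pop_size=20, n_gen=30, p_mut=0.02):
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))       # random binary masks
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                    # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < p_mut                # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()].astype(bool)                 # best mask found

# Usage: X is (n_clips, n_ws_bins) time-averaged WS coefficients, y the scene labels.
# mask = ga_select(X, y); X_reduced = X[:, mask]
```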
Summary
Wavelet Scattering (WS), also known as Deep Scattering Spectrum (DSS), is a signal processing technique developed by Stéphane Mallat [1-4] to recover the information lost in the log-mel spectrogram, especially when the timescale is larger than 25 ms. This paper adopts feature selection as its approach to dimensionality reduction. Employing a heuristic technique [13] is more effective for finding an optimal feature subset than searching through every possible subset. Common heuristic search techniques such as sequential forward/backward selection, "plus-l-take-away-r" [14-16], and sequential forward/backward floating selection are fast and easy to implement, but they only make local decisions and cannot guarantee a globally optimal solution. Extending the search for an optimal feature subset for WS, this paper proposes combining multiple timescales of WS to further exploit the GA's search capabilities. In summary, the paper applies GA-based feature selection to WS to reduce its large frequency dimension and to combine multiple timescales effectively, redefining how multiple timescales of WS can be combined within the GA feature selection framework.
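As an illustration of the multi-timescale idea, the sketch below concatenates WS coefficients computed at several scattering windows into one long feature vector before applying a GA-selected mask. Kymatio is used purely for illustration (the paper does not name a toolbox), the J values assume 44.1 kHz audio where 2**J samples correspond roughly to the quoted 46/92/185/371 ms windows, and Q=8 and the placeholder mask are assumptions.

```python
# Sketch: building a multi-timescale WS feature vector, then masking it.
import numpy as np
from kymatio.numpy import Scattering1D

SR = 44_100
N = SR * 10                              # one 10-second clip (placeholder length)
x = np.random.randn(N)                   # placeholder waveform

coeffs = []
for J in (11, 12, 13, 14):               # ~46, 92, 185, 371 ms scattering windows
    S = Scattering1D(J=J, shape=N, Q=8)
    Sx = S(x)                            # (n_paths, n_frames) at this timescale
    # Average over time so each timescale contributes a vector of "frequency"
    # bins; concatenating these vectors produces the enlarged dimension that
    # the GA mask is then applied to.
    coeffs.append(Sx.mean(axis=-1))

multi_ts = np.concatenate(coeffs)        # one long feature vector per clip
mask = np.ones_like(multi_ts, dtype=bool)  # stand-in for the GA-selected mask
reduced = multi_ts[mask]                 # kept bins feed the downstream CNN
```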