Abstract

Regularized least squares for sparse reconstruction is gaining popularity because it can reconstruct a speech signal from a noisy observation. The reconstruction relies on the sparsity of speech, which provides the demarcation from noise. However, no measure is incorporated in the sparse reconstruction to optimize the overall speech quality. This paper proposes a two-level optimization strategy that incorporates quality design attributes into the sparse solution for compressive speech enhancement by hyper-parameterizing the tuning parameter. The first level compresses the big data, and the second level optimizes the tuning parameter using different optimization criteria (such as the Gini index, the Akaike information criterion (AIC), and the Bayesian information criterion (BIC)). The set of solutions can then be measured against the desired design attributes to achieve the best trade-off between noise suppression and speech distortion. Numerical results show that the proposed approach can effectively fuse the trade-offs in the solutions for different noise profiles over a wide range of signal-to-noise ratios (SNRs).
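To make the two-level idea concrete, the following is a minimal sketch (not the authors' implementation), assuming an l1-regularised least-squares (LASSO-type) formulation min_x 0.5*||y - Dx||^2 + lam*||x||_1 solved by iterative soft-thresholding, with the second level selecting the tuning parameter lam from a grid by AIC or BIC; the dictionary D, the solver, the parameter grid, and the synthetic signal are all illustrative assumptions (a Gini-index criterion could be substituted in the same loop).

    import numpy as np

    def ista(D, y, lam, n_iter=200):
        """Iterative soft-thresholding for the l1-regularised least-squares problem."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            g = D.T @ (D @ x - y)              # gradient of the quadratic data term
            z = x - g / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    def select_lambda(D, y, lams, criterion="bic"):
        """Second level: pick the tuning parameter that minimises AIC or BIC."""
        n = len(y)
        best_lam, best_score, best_x = None, np.inf, None
        for lam in lams:
            x = ista(D, y, lam)
            rss = np.sum((y - D @ x) ** 2)
            k = np.count_nonzero(x)            # degrees of freedom ~ sparsity level
            penalty = 2 * k if criterion == "aic" else k * np.log(n)
            score = n * np.log(rss / n + 1e-12) + penalty
            if score < best_score:
                best_lam, best_score, best_x = lam, score, x
        return best_lam, best_x

    # Usage: y is the noisy (compressed) observation, D a sparsifying dictionary.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((128, 256))
    x_true = np.zeros(256)
    x_true[rng.choice(256, 10, replace=False)] = 1.0
    y = D @ x_true + 0.1 * rng.standard_normal(128)
    lam_star, x_hat = select_lambda(D, y, np.logspace(-3, 1, 20), criterion="bic")

Sweeping lam and scoring each candidate solution yields the set of solutions that can then be compared against the desired design attributes (suppression versus distortion) mentioned above.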
