Abstract
Conventional acoustic echo cancellation uses an adaptive algorithm to identify the impulse response of the echo path. In this paper, we instead use a convolutional neural network (CNN) filter to remove the echo from the microphone input signal, so that only the near-end speech signal is transmitted to the far end. The neural network filter's weights converge well on general speech signals. In particular, it operates stably, without divergence, even in the double-talk state, in which both parties speak simultaneously. Simulation results show that this system achieves superior performance and more stable operation than an echo canceller with an adaptive filter structure.
Highlights
Acoustic echo arises when the loudspeaker and near-end signals are combined at the microphone and sent to the far end
The signal received from the far end is emitted through the near-end loudspeaker and mixed with the near-end speech at the microphone; this acoustic echo disturbs reception of the near-end speech at the far end
The echo signal is cancelled by adaptively converging on the acoustic impulse response between the loudspeaker and the microphone with a FIR (finite impulse response) filter [1]
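The conventional FIR-based cancellation described in the highlights can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, filter length, and step size are illustrative assumptions (the NLMS update itself appears later in the summary):

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, num_taps=64, mu=0.5, eps=1e-8):
    """Cancel echo from the microphone signal with an NLMS-adapted FIR filter.

    far_end: loudspeaker (echo source) samples.
    mic: microphone samples (echo, possibly plus near-end speech).
    Returns the residual (echo-cancelled) signal sent to the far end.
    """
    w = np.zeros(num_taps)        # FIR weights: estimate of the echo path
    x_buf = np.zeros(num_taps)    # recent far-end samples, newest first
    e = np.zeros(len(mic))        # output: residual signal
    for n in range(len(mic)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = far_end[n]
        y = w @ x_buf             # estimated echo at the microphone
        e[n] = mic[n] - y         # subtract the estimate
        # NLMS: step size normalized by the input power
        w += mu * e[n] * x_buf / (x_buf @ x_buf + eps)
    return e
```

In a one-way (far-end only) interval the residual shrinks as the filter converges to the echo path; during double-talk the near-end speech acts like a large disturbance in e[n] and drives the weights away from the true echo path, which is the weakness the paper's CNN approach targets.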
Summary
Acoustic echo is a problem when loudspeaker and near-end signals are combined at the microphone and sent to the far end. The echo signal is conventionally cancelled by adaptively converging on the acoustic impulse response between the loudspeaker and the microphone with a FIR (finite impulse response) filter [1]. This operates normally only in one-way conversation, in which only a far-end signal exists; in a double-talk interval, in which near-end speech is also present, the ability to cancel the echo deteriorates sharply. Since the error of the output neuron propagates backward to the hidden neurons and influences the adjustment of their parameters, the gradient-descent learning method of the multi-layer perceptron is called the error back-propagation learning algorithm, and each parameter is updated accordingly. Using the NLMS (normalized least mean square) algorithm, the weights of each layer, such as the output-layer weights w_jk^(2), are updated at each iteration u > 0.
Source: Indonesian Journal of Electrical Engineering and Informatics (IJEEI)