Abstract

With the development of Internet technology, multimedia information resources are increasing rapidly. Faced with the massive resources in multimedia music libraries, it is extremely difficult for people to find the target music that meets their needs. Enabling computers to analyze and perceive users’ needs for music resources has therefore become a goal in the development of human-computer interaction. Content-based music information retrieval is mainly embodied in the automatic classification and recognition of music. Traditional feedforward neural networks are prone to losing local information when extracting singing voice features. For this reason, on the basis of fully considering the persistence of information during network propagation, this paper proposes an enhanced two-stage super-resolution reconstruction residual network that can effectively integrate the features learned by each layer while increasing the depth of the network. The first reconstruction stage completes hierarchical learning of singing voice features through dense residual units to improve information integration. The second reconstruction stage performs residual relearning on the high-frequency singing voice information learned in the first stage to reduce the reconstruction error. Between the two stages, the model introduces feature scaling and dilated (expansion) convolution to reduce information redundancy and enlarge the receptive field of the convolution kernel. On this basis, monophonic singing voice separation based on a high-resolution neural network is proposed.
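The dilated (expansion) convolution mentioned above enlarges the receptive field of a convolution kernel without adding parameters. The following minimal numpy sketch is illustrative only, not the paper's implementation: `dilated_conv1d` and `receptive_field` are hypothetical helper names, and the 1-D case stands in for the 2-D convolutions a spectrogram network would use.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid 1-D convolution with a dilated kernel (no padding)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one layer
    out_len = len(x) - span + 1
    out = np.zeros(out_len)
    for i in range(out_len):
        # sample the input at dilated positions
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

def receptive_field(kernel_size, dilations):
    """Total receptive field of stacked dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

x = np.arange(16, dtype=float)
y = dilated_conv1d(x, np.array([1.0, 1.0, 1.0]), dilation=2)
print(len(y))                          # 12: a 3-tap kernel at dilation 2 spans 5 samples
print(receptive_field(3, [1, 2, 4]))   # 15: exponential dilations grow the field quickly
```

Stacking layers with dilations 1, 2, 4 reaches a 15-sample receptive field with the same parameter count as three ordinary 3-tap layers, which would cover only 7 samples.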
Because the high-resolution network contains parallel subnetworks at different resolutions, it maintains an original-resolution representation alongside multiple low-resolution representations, avoiding the information loss caused by downsampling in serial networks; repeated feature fusion across the subnetworks generates new semantic representations, allowing comprehensive, high-precision, and highly abstract features to be learned. In this article, a high-resolution neural network is used to model the time spectrogram in order to accurately estimate the target time-amplitude spectrograms. Experiments on the MIR-1K dataset show that, compared with the current leading SH-4Stack model, the proposed method improves the SDR, SIR, and SAR indicators used to measure separation performance, confirming the effectiveness of the algorithm.
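The SDR, SIR, and SAR indicators cited above come from the BSS Eval family of source-separation metrics. The full toolkit decomposes the estimation error into interference and artifact components; the numpy sketch below computes only a simplified, scale-invariant SDR and is an assumption-laden illustration, not the evaluation code used in the paper.

```python
import numpy as np

def si_sdr(reference, estimate):
    """Simplified (scale-invariant) signal-to-distortion ratio in dB.

    BSS Eval additionally splits the error into interference (SIR) and
    artifact (SAR) terms; this sketch keeps only the overall ratio.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    # project the estimate onto the reference to isolate the target part
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    error = estimate - target
    return 10.0 * np.log10(np.dot(target, target) / np.dot(error, error))

rng = np.random.default_rng(0)
clean = rng.standard_normal(1000)
light = clean + 0.1 * rng.standard_normal(1000)   # mild distortion
heavy = clean + 0.5 * rng.standard_normal(1000)   # strong distortion
print(si_sdr(clean, light) > si_sdr(clean, heavy))  # higher SDR = better separation
```

A 10% additive-noise level yields roughly 20 dB here, matching the intuition that each indicator rises as the separated signal approaches the clean source.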

Highlights

  • Multimedia technology changes with each passing day, constantly enriching people’s daily lives and work

  • Inspired by an analysis of densely connected networks, this paper combines the advantages of deep residual networks and densely connected networks to propose an enhanced two-stage reconstruction residual network and introduces two deep learning features. The proposed two-stage residual deep convolutional neural network is subjected to comparative experiments and analysis: the experimental environment, data preparation, and singing voice separation evaluation indicators are first introduced; then, experimental schemes based on high-resolution network song separation, phase optimization, spectrum amplitude constraints, and data expansion are designed; finally, the separation performance of each algorithm is compared and analyzed

  • Compared with the pure accompaniment, all three separation methods can predict a reasonably accurate time spectrogram; closer analysis of the accompaniment in the yellow box reveals that the SH-4Stack and U-Net results contain erroneous “nonaccompaniment” parts, while the output of the two-stage residual deep convolutional neural network is closer to the pure accompaniment, improving separation accuracy

Introduction

Multimedia technology changes with each passing day, constantly enriching people’s daily lives and work. Music has long been an important part of people’s spiritual culture [1]. It is a special language through which people express ideals, thoughts, and feelings and communicate with one another. As one of the hotspots in the field of signal and information processing, music separation is an important part of music technology research [5]. Inspired by an analysis of densely connected networks, this paper combines the advantages of deep residual networks and densely connected networks to propose an enhanced two-stage reconstruction residual network and introduces two deep learning features. The proposed two-stage residual deep convolutional neural network is subjected to comparative experiments and analysis: the experimental environment, data preparation, and singing voice separation evaluation indicators are first introduced; then, experimental schemes based on high-resolution network song separation, phase optimization, spectrum amplitude constraints, and data expansion are designed; finally, the separation performance of each algorithm is compared and analyzed.
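Separation pipelines of the kind described here typically have the network estimate magnitude spectrograms for the vocal and accompaniment, then share the mixture's energy between them with a Wiener-style soft mask. The sketch below is a generic illustration of that masking step under assumed toy magnitudes; `soft_mask_separate` is a hypothetical helper, not a function from the paper.

```python
import numpy as np

def soft_mask_separate(mix_mag, voc_est, acc_est, eps=1e-8):
    """Wiener-style soft masking: split the mixture magnitude between
    the estimated vocal and accompaniment spectrograms."""
    voc_mask = voc_est / (voc_est + acc_est + eps)
    vocal = voc_mask * mix_mag
    accomp = (1.0 - voc_mask) * mix_mag
    return vocal, accomp

# toy magnitude spectrograms: 4 frequency bins x 3 time frames
mix = np.full((4, 3), 2.0)
voc = np.full((4, 3), 1.5)   # network's vocal estimate (assumed values)
acc = np.full((4, 3), 0.5)   # network's accompaniment estimate (assumed values)
v, a = soft_mask_separate(mix, voc, acc)
print(v[0, 0], a[0, 0])  # ~1.5 and ~0.5: mixture energy split 3:1
```

Because the two masks sum to one, the separated magnitudes always add back to the mixture, which keeps the estimates consistent with the observed spectrogram regardless of network error.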

Related Work
Pretreatment
Experiment and Result Analysis
Conclusion