Abstract

On the basis of Time Domain Analysis, this paper proposes a method for music beat tracking. Through this method, music beats are detected and tracked by timestamp and intensity value. Generally, the beat areas of a music signal concentrate more energy than other areas; therefore, beat spots can be filtered out by a tracking algorithm with a dynamic threshold value. In this paper, the dynamic threshold value in the tracking algorithm is modified by using two sliding windows, a Prediction Window and a Detection Window. In addition, a new indicator of the stationarity of the signal is proposed. This factor can distinguish, in the time domain, music signals with rhythmic beats from those consisting of long tones or noise. The experimental results demonstrate the simplicity, adaptability, and robustness of this method, showing it to be an efficient algorithm for music beat tracking.

With the development of the internet, the amount of music data produced every day has grown explosively over the past years. How to extract characteristic parameters from music is a significant subject in both music classification and music recommendation. As an important time-domain music feature, music rhythm carries significant weight in classifying different music types. Paper (1) proposes a multiple-agent system that can successfully track the beats of rhythmic music; however, the experiment presupposes that the rhythm of the music is strong enough. Paper (2) introduces a method that uses autocorrelation phase entropy to analyze the meter and tempo of music; the experimental results show 97 percent accuracy in music tempo induction on a data set of 100 songs. Another music feature extraction method, based on a low-pass Gaussian filter, is proposed in paper (3). Paper (4) presents an adaptive-whitening-based real-time algorithm for music beat tracking.
This algorithm improves performance when the dynamics of the music signal vary strongly. Papers (5) and (6) present two music feature extraction methods based on audio content: the former automatically detects musical emotion by processing the melody and rhythm of MIDI files, while the latter presents a frequency-domain feature extraction method. Paper (7) surveys the main methods of music audio analysis in recent years. As an important part of music, musical notes can be detected by computer as well; paper (8) presents a method that extracts melody and rhythm features from music notes.

In terms of the recent research, two issues cause performance problems. First, the threshold value set in beat tracking algorithms is not chosen on a sufficiently principled basis. Second, rhythm detection algorithms perform well only under the precondition that the music rhythm is strong enough: when the music signal is stable or consists of many long tones, performance declines significantly. To solve these problems, this paper presents a new algorithm that tracks beats in music PCM data with two sliding windows that modify the dynamic threshold value in real time, and proposes a Stability Vector that indicates the stationarity of the music signal.
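The paper itself does not give pseudocode, but the idea it describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame sizes, window lengths, the threshold factor `k`, and the coefficient-of-variation stationarity indicator are all assumptions chosen for clarity. A trailing Prediction Window estimates the recent energy baseline, a short Detection Window holds the newest frames, and a frame whose Detection Window mean exceeds `k` times the Prediction Window mean is reported as a beat spot with its timestamp (frame index) and intensity.

```python
import math

def short_time_energy(samples, frame_size=512):
    """Sum of squared PCM samples per non-overlapping frame."""
    return [sum(s * s for s in samples[i:i + frame_size])
            for i in range(0, len(samples) - frame_size + 1, frame_size)]

def track_beats(energy, pred_len=8, det_len=2, k=1.5):
    """Sketch of dynamic-threshold beat tracking with two sliding windows.

    The threshold is re-derived at every step from the trailing
    Prediction Window (baseline energy); a beat is reported when the
    mean of the short Detection Window exceeds k times that baseline.
    Returns (frame_index, intensity) pairs.
    """
    beats = []
    for t in range(pred_len + det_len, len(energy) + 1):
        pred = energy[t - pred_len - det_len : t - det_len]
        det = energy[t - det_len : t]
        threshold = k * (sum(pred) / pred_len)   # dynamic threshold
        if sum(det) / det_len > threshold:
            beats.append((t, energy[t - 1]))
    return beats

def stability(energy):
    """Hypothetical stationarity indicator (coefficient of variation).

    A near-zero value suggests a stable or long-tone signal, in which
    case beat detections should be treated with suspicion.
    """
    n = len(energy)
    mean = sum(energy) / n
    var = sum((e - mean) ** 2 for e in energy) / n
    return math.sqrt(var) / mean if mean > 0 else 0.0
```

On a flat energy envelope `track_beats` reports nothing and `stability` is zero, while a periodic burst of energy raises both, which is the qualitative behavior the Stability Vector is meant to capture.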
