Digital music resources have proliferated since the dawn of the digital music age, and genre is one of the most important labels for describing and organizing music. Genre labels play a crucial role in discovering and distinguishing digital music resources, yet for a large music database, manual annotation is too costly and slow to meet current needs. The primary findings and innovations of this paper are as follows. To describe a piece of music more precisely, it is divided into multiple local musical instrument digital interface (MIDI) passages with similar playing styles; features are extracted from each passage and assembled into a feature sequence. The process comprises extracting a note feature matrix, dividing the piece into segments and extracting themes based on that matrix, researching and extracting effective features for each segment theme, and composing the resulting feature sequence. Because standard classification methods have shallow structures, it is difficult for such classifiers to learn the temporal and semantic information in music. This research therefore investigates a recurrent neural network (RNN) with an attention mechanism that takes the feature sequence of MIDI segments as input. To build a dataset for music categorization experiments, 1,920 MIDI files with genre labels were collected from the Internet. The proposed classification method is validated by comparing its accuracy against that of equal-length segment classification.
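The pipeline described above can be sketched at a high level: each piece becomes a sequence of per-segment feature vectors, an RNN reads that sequence, and attention pools the hidden states into a single summary used for genre scoring. The following is a minimal NumPy illustration of that idea, not the paper's actual model; the feature dimensions, weight shapes, and a simple Elman-style recurrence are all assumptions made for the sketch.

```python
import numpy as np

def classify_segment_sequence(X, Wx, Wh, wa, Wo):
    """Score genre probabilities for one piece of music.

    X  : (T, d)  feature vectors for T MIDI segments (hypothetical features)
    Wx : (d, h)  input-to-hidden weights
    Wh : (h, h)  hidden-to-hidden weights
    wa : (h,)    attention scoring vector
    Wo : (h, g)  hidden-to-genre output weights
    """
    T = X.shape[0]
    h = Wh.shape[0]
    H = np.zeros((T, h))
    prev = np.zeros(h)
    for t in range(T):                        # simple RNN over the segment sequence
        prev = np.tanh(X[t] @ Wx + prev @ Wh)
        H[t] = prev
    scores = H @ wa                           # one attention score per segment
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                      # softmax attention weights over segments
    context = alpha @ H                       # attention-weighted summary of the piece
    logits = context @ Wo
    p = np.exp(logits - logits.max())
    return p / p.sum(), alpha                 # genre probabilities, attention weights

# Illustrative usage with random weights (untrained, assumed sizes):
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))                  # 8 segments, 16 features each
p, alpha = classify_segment_sequence(
    X,
    rng.normal(size=(16, 32)) * 0.1,
    rng.normal(size=(32, 32)) * 0.1,
    rng.normal(size=32),
    rng.normal(size=(32, 4)),                 # 4 hypothetical genre classes
)
```

In a real system the weights would be learned by backpropagation and the per-segment features would come from the note feature matrix and theme extraction steps; the sketch only shows how attention lets the classifier weight stylistically informative segments when summarizing a variable-length piece.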