Music Tone Synthesis is an applied technique for identifying and studying the specific orchestral tones present in a piece of music. It is especially effective in vocal music teaching classes, where it can support the practice and development of musicians. Music Tone Synthesis enables singers and composers to produce a large range of sounds and to imitate instruments and effects that would be impractical or unrealistic to create on standard classical instruments. In this manuscript, Music Tone Synthesis based on an Anti-Interference Dynamic Integral Neural Network enhanced with the Artificial Hummingbird Optimization Algorithm (MTS-AIDINN-AHOA) is proposed. The input data are obtained from audio signals and are pre-processed using Stein Particle Filtering (SPF) to remove noise. The pre-processed data are passed to the Two-Sided Offset Quaternion Linear Canonical Transform (TSOQLCT) to extract musical features such as melody, harmony, tempo, and dynamics. The extracted features are then provided to the Anti-Interference Dynamic Integral Neural Network (AIDINN), which performs the music tone synthesis and classifies the tone attributes of pitch, duration, volume, and tone color. In general, AIDINN does not employ an adaptive optimization strategy to determine the ideal parameters that ensure precise prediction. Therefore, the Artificial Hummingbird Optimization Algorithm (AHOA) is used to enhance AIDINN for Music Tone Synthesis. The proposed MTS-AIDINN-AHOA method is implemented in MATLAB, and its performance is evaluated against other existing techniques. The proposed technique attains 26.36%, 20.69%, and 35.29% higher accuracy; 19.23%, 23.56%, and 33.96% higher precision; 26.28%, 31.26%, and 19.66% higher recall; and 28.96%, 33.21%, and 23.89% higher specificity compared with the existing methods Musical Tone Recognition Based on Improved RNN for Vocal Music Teaching Network Courses (MTS-RNN), Music Timbre Extracted from Audio Signal Features (MTS-BPNN), and Feature Extraction and Categorization of Music Content Based on Deep Learning (MTS-SMNN), respectively.
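The abstract outlines a four-stage pipeline (SPF denoising, TSOQLCT feature extraction, AIDINN classification, AHOA tuning). The following is a minimal, hypothetical Python sketch of that data flow only; SPF, TSOQLCT, AIDINN, and AHOA are paper-specific methods and are represented here by simple stand-ins (a median filter, standard librosa descriptors, and a stub classifier), not the authors' MATLAB implementation. The file name "example.wav" is an assumed placeholder.

import numpy as np
import librosa
from scipy.signal import medfilt

def preprocess(y):
    """Stand-in for Stein Particle Filtering: simple median-filter denoising."""
    return medfilt(y, kernel_size=5)

def extract_features(y, sr):
    """Stand-in for TSOQLCT: melody-, harmony-, tempo-, and dynamics-related descriptors."""
    f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"))  # melody proxy
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)                                    # harmony proxy
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)                                      # tempo
    rms = librosa.feature.rms(y=y)                                                      # dynamics proxy
    return np.hstack([np.nanmean(f0), chroma.mean(axis=1), np.atleast_1d(tempo), rms.mean()])

def classify_tone(features):
    """Stub for the AIDINN classifier (AHOA-tuned in the paper)."""
    # A trained model would map the feature vector to pitch, duration, volume, and tone-color classes.
    return {"pitch": None, "duration": None, "volume": None, "tone_color": None}

if __name__ == "__main__":
    y, sr = librosa.load("example.wav", sr=None)  # hypothetical input audio signal
    y = preprocess(y)
    features = extract_features(y, sr)
    print(classify_tone(features))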