This study explores the intricate relationship between biomechanical movement and musical expression, focusing on the identification of musical styles and emotions. Violin performance is characterized by complex interactions between physical actions—such as bowing technique, finger placement, and posture—and the resulting acoustic output. Recent advances in motion capture technology and sound analysis have enabled a more objective examination of these processes. However, the current literature frequently addresses biomechanics and acoustic features in isolation, lacking an integrated understanding of how physical movements translate into specific musical expressions. Machine Learning (ML), particularly Long Short-Term Memory (LSTM) networks, provides a promising avenue for bridging this gap: LSTM models are adept at capturing temporal dependencies in sequential data, making them well suited to the dynamic nature of violin performance. In this work, we propose a comprehensive model that combines biomechanical analysis with Mel-spectrogram-based LSTM modeling to automate the identification of musical styles and emotions in violin performances. Using motion capture systems, Inertial Measurement Units (IMUs), and high-fidelity audio recordings, we collected synchronized biomechanical and acoustic data from violinists performing various musical excerpts. The LSTM model was trained on this dataset to learn the connections between physical movements and the acoustic features of each performance. Key findings demonstrate the effectiveness of this integrated approach: the LSTM model achieved a validation accuracy of 92.5% in classifying musical styles and emotions, with precision, recall, and F1-score reaching 94.3%, 92.6%, and 93.4%, respectively, by the 100th epoch.
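As a quick consistency check on the reported metrics, the F1-score is the harmonic mean of precision and recall, which can be verified directly from the figures above (this is a standard-formula sketch, not the authors' evaluation code):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported values: precision = 94.3%, recall = 92.6%
f1 = f1_score(0.943, 0.926)
print(round(f1 * 100, 1))  # 93.4, matching the reported F1-score
```

The agreement (2 * 0.943 * 0.926 / (0.943 + 0.926) ≈ 0.934) confirms the three reported metrics are internally consistent.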
The analysis also revealed strong correlations between specific biomechanical parameters, such as shoulder joint angle and bowing velocity, and acoustic features, such as sound intensity and vibrato amplitude.
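The correlation analysis described above can be illustrated with a minimal Pearson-correlation sketch; the data here are hypothetical placeholder values (paired bowing-velocity and sound-intensity samples), not measurements from the study:

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: bowing velocity (cm/s) vs. sound intensity (dB)
bowing_velocity = [20.0, 35.0, 50.0, 65.0, 80.0]
sound_intensity = [62.0, 68.5, 73.0, 79.5, 84.0]
print(round(pearson_r(bowing_velocity, sound_intensity), 3))
```

A coefficient near +1 would indicate the kind of strong positive relationship the study reports between bowing velocity and sound intensity.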