This research explores music generation using LSTM and VAE neural network architectures, leveraging MIDI representations of music. LSTMs specialize in processing sequential data, while VAEs compress musical data into low-dimensional latent representations. Both models are optimized by analyzing the relationship between training loss and training epochs, and a final comparison between the LSTM and VAE determines the more effective algorithm for music generation. Traditional sequence-modeling methods suffer from vanishing and exploding gradients, motivating the deep learning approaches explored here. The study aims to advance machine manipulation of music, facilitating the generation of new compositions from existing MIDI files.

Keywords: Music generation, LSTM, Variational Autoencoders (VAEs), MIDI representation, Sequential data, Neural network architectures, Training loss, Optimization, Deep learning, Vanishing gradient, Exploding gradient
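As a hedged illustration of why the LSTM is suited to sequential MIDI data, the sketch below implements a single LSTM cell step in NumPy: the additive cell-state update (`c_new = f*c + i*g`) is what mitigates the vanishing-gradient problem mentioned above. This is a minimal toy example for intuition, not the paper's actual model; all names, dimensions, and the one-hot note encoding are assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input x, previous hidden state h, previous cell state c.
    W, U, b hold the stacked parameters of all four gates (illustrative only)."""
    z = W @ x + U @ h + b                 # stacked pre-activations for the 4 gates
    i, f, o, g = np.split(z, 4)           # input, forget, output gates + candidate
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # additive cell-state update
    h_new = sigmoid(o) * np.tanh(c_new)   # gated hidden output
    return h_new, c_new

# Toy run: a "melody" of 16 one-hot MIDI-like note inputs, hidden size 8.
rng = np.random.default_rng(0)
D, H, T = 12, 8, 16                       # input dim, hidden dim, sequence length
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    x = np.eye(D)[rng.integers(D)]        # one-hot encoding of the note at step t
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (8,)
```

Because the cell state is updated additively rather than through repeated multiplication, gradients can flow across many timesteps, which is the property the abstract contrasts against traditional methods.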