Abstract
Machine learning and deep learning algorithms produce very different results with different values of their hyperparameters. These parameters require optimization because no single setting is optimal for all problems. In this paper, eight hyperparameters of the Long Short-Term Memory (LSTM) network (go-backward, number of epochs, batch size, dropout, activation function, optimizer, learning rate, and number of layers) were examined on daily and hourly Bitcoin datasets. The effect of each parameter on the results for the daily dataset was evaluated and explained. The parameters were examined with the HParams dashboard of TensorBoard. As a result, examining all combinations of parameters with HParams produced the best test Mean Square Error (MSE) values: 0.000043633 on the hourly dataset and 0.00073843 on the daily dataset. Both datasets produced better results with the tanh activation function. Finally, when the results are interpreted, the daily dataset produces better results with a small learning rate and small dropout values, whereas the hourly dataset produces better results with a large learning rate and large dropout values.
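The exhaustive search over all hyperparameter combinations described above can be sketched as a Cartesian product over per-parameter value grids. The grids below are illustrative assumptions for the eight parameters named in the abstract, not the paper's actual search space, and the sketch enumerates combinations only; it does not train an LSTM.

```python
from itertools import product

# Hypothetical value grids for the eight hyperparameters examined in the paper.
# The actual candidate values are not given in the abstract.
grid = {
    "go_backward": [False, True],
    "epochs": [50, 100],
    "batch_size": [32, 64],
    "dropout": [0.0, 0.2],
    "activation": ["tanh", "relu"],
    "optimizer": ["adam", "rmsprop"],
    "learning_rate": [1e-2, 1e-3],
    "num_layers": [1, 2],
}

def all_combinations(grid):
    """Yield every hyperparameter combination as a dict (Cartesian product)."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

# Each combination would be one trial logged to TensorBoard's HParams
# dashboard; with two values per parameter there are 2^8 = 256 trials.
combos = list(all_combinations(grid))
print(len(combos))
```

In a real experiment each dict would configure one LSTM training run, and the resulting test MSE would be logged per trial so the dashboard can rank combinations.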
Sakarya University Journal of Computer and Information Sciences