Abstract

Modern deep neural network training relies on mini-batch stochastic gradient optimization. While large mini-batches improve computational parallelism, small-batch training has been shown to deliver better generalization and to require significantly less memory, which can also improve machine throughput. However, mini-batch size and composition, key factors in training deep neural networks, have not been sufficiently investigated for correlated group features or for iterating over highly complex ones. In this work, an unsupervised learning method first clusters the data into groups with similar properties, making the training process more stable and faster. A supervised learning algorithm is then applied with the proposed cluster repeated mini-batch training (CRMT) method, which replaces the random mini-batch composition of standard training with training in order of clusters. Specifically, self-organizing maps (SOM) are used to cluster the data into n groups based on the dataset's labels, and artificial neural network (ANN) models are then trained on each cluster using the cluster repeated mini-batch training method. Experiments conducted on EEG datasets demonstrate the effectiveness of the proposed method and guide its optimization. In addition, the results of our research outperform other state-of-the-art methods.
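The abstract describes a two-stage pipeline: SOM clustering followed by cluster-ordered mini-batch training. The listing below is a minimal sketch of that idea, not the authors' implementation; the SOM grid shape, the MLP architecture, and all hyperparameters (crmt_train, n_clusters, batch_size, epochs) are illustrative assumptions, and the minisom and PyTorch libraries stand in for whatever tooling the paper actually used.

    # Sketch of cluster repeated mini-batch training (CRMT): cluster samples
    # with a self-organizing map, then feed mini-batches to the network
    # cluster by cluster instead of in random order over the whole dataset.
    import numpy as np
    import torch
    import torch.nn as nn
    from minisom import MiniSom

    def crmt_train(X, y, n_clusters=4, batch_size=32, epochs=10):
        # 1) Unsupervised step: a 1 x n_clusters SOM grid, so each sample
        #    maps to one of n_clusters units (clusters). Grid shape is an
        #    assumption for illustration.
        som = MiniSom(1, n_clusters, X.shape[1], sigma=0.5, learning_rate=0.5)
        som.train_random(X, 1000)
        cluster_ids = np.array([som.winner(x)[1] for x in X])

        # 2) Supervised step: a small MLP classifier (architecture assumed).
        model = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(),
                              nn.Linear(64, int(y.max()) + 1))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        X_t = torch.tensor(X, dtype=torch.float32)
        y_t = torch.tensor(y, dtype=torch.long)

        for _ in range(epochs):
            # CRMT core: iterate clusters in a fixed order; within each
            # cluster, draw mini-batches only from that cluster's samples.
            for c in range(n_clusters):
                idx = np.where(cluster_ids == c)[0]
                np.random.shuffle(idx)
                for start in range(0, len(idx), batch_size):
                    batch = idx[start:start + batch_size]
                    opt.zero_grad()
                    loss = loss_fn(model(X_t[batch]), y_t[batch])
                    loss.backward()
                    opt.step()
        return model

    # Toy usage with random features standing in for EEG data.
    if __name__ == "__main__":
        X = np.random.randn(512, 16)
        y = np.random.randint(0, 2, size=512)
        model = crmt_train(X, y)

The key design choice this sketch highlights is that batch composition becomes deterministic at the cluster level: each mini-batch contains only samples with similar properties, which is the property the abstract credits for more stable and faster training.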
