Artificial neural networks (ANNs), inspired by biological neural networks and the human brain, have in recent years attracted the interest of neuroscience researchers as tools for simulating cognitive processes. However, the learning mechanisms inside artificial neural networks remain unclear. Time-series analysis of a network's weights during training can reveal the dynamics behind the weights and provide new insight into the learning process.

In this study, we investigate the dynamics of the weights of a three-neuron feed-forward artificial neural network based on recursive maps and time series of the weights. The weight dynamics are analyzed using fundamental tools of chaos theory, such as bifurcation analysis, equilibrium-point analysis, and the emergence of coexisting attractors. Whereas previous works examined only the effect of in-network factors on the weight dynamics, here we investigate both in-network and out-of-network factors, including parameters, network structure, network input and output, bias, and activation function. The results show that the weight time series exhibit rich nonlinear dynamics, including intermittency and chaos, even in the simplest network structures. Bifurcation analysis reveals the emergence of several coexisting attractors, suggesting that the learning process in an artificial neural network, and perhaps in the human brain, takes place on different attractors. Hence, it is the initial condition of the brain that determines on which attractor learning occurs.

For the first time, using the recursive map of the weights, we introduce a novel discrete neuronal model with learning capability, which maps an input to an output. This ability distinguishes our model from previous ones, which only simulate the neuron's electrical voltage without any internal learning process. The proposed model can simulate cognitive phenomena such as neuronal synchronization under near-realistic conditions during the training stage.
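To make the idea of a "weight time series" concrete, the following is a minimal sketch, not the authors' exact setup: a feed-forward network with one input, one hidden neuron, and one output neuron, trained by plain gradient descent on a single input-output pair. The weight value recorded after each update forms the time series whose dynamics can then be studied as a function of a bifurcation parameter such as the learning rate. All names, targets, and parameter values here are illustrative assumptions.

```python
import math
import random

def sigmoid(x):
    """Standard logistic activation function."""
    return 1.0 / (1.0 + math.exp(-x))

def weight_time_series(lr, steps=200, x=1.0, target=0.3, seed=0):
    """Train the tiny network and return the trajectory of the
    input-to-hidden weight w1, sampled once per gradient step."""
    rng = random.Random(seed)
    w1 = rng.uniform(-1, 1)   # input -> hidden weight
    w2 = rng.uniform(-1, 1)   # hidden -> output weight
    series = []
    for _ in range(steps):
        h = sigmoid(w1 * x)   # hidden activation
        y = sigmoid(w2 * h)   # network output
        e = y - target        # output error
        # Backpropagation for the squared-error loss 0.5 * e**2
        dy = e * y * (1 - y)
        dw2 = dy * h
        dw1 = dy * w2 * h * (1 - h) * x
        w2 -= lr * dw2
        w1 -= lr * dw1
        series.append(w1)     # record the weight after each update
    return series

# One realization of the weight time series; sweeping lr and plotting the
# late-time values of the trajectory is one way to build a bifurcation
# diagram of the weight dynamics.
traj = weight_time_series(lr=0.5)
```

For small learning rates the trajectory typically settles onto a fixed point (an equilibrium of the weight-update map); richer behavior, as reported in the study, would be probed by varying the learning rate, network structure, bias, or activation function.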