Reinforcement learning has recently emerged as a promising approach to the design of adaptive optimal controllers. It offers a feasible way to circumvent both the "curse of dimensionality" and the need for an accurate system model, two limitations inherent in the classical dynamic programming algorithm. Exploiting this property, we introduce reinforcement learning into a predictor-based online adaptive neural dynamic surface predictive control architecture and develop a novel robust predictive control framework for systems subject to uncertainties. Specifically, within this framework, an adaptive dynamic programming control strategy based on a critic neural network is developed to learn the optimal control policy. This modification alleviates the performance deterioration caused by system uncertainties and enables smooth, fast learning, while preserving the merits of finite control-set model predictive control. Finally, the effectiveness and applicability of the proposed control methodology are verified through performance evaluation.
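To make the critic-based idea concrete, the sketch below shows a minimal adaptive-dynamic-programming loop in the spirit described above: a critic approximates the value function and is updated from one-step Bellman targets, while the control is chosen greedily from a finite control set. Everything here is an illustrative assumption, not the paper's actual plant, critic architecture, or tuning: the scalar linear model, the quadratic critic with a single weight, and all numerical values are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical scalar plant x_{k+1} = a*x + b*u with quadratic stage cost.
# All names and values are illustrative assumptions, not the paper's model.
a, b = 1.2, 0.5                   # open-loop unstable plant
q, r = 1.0, 0.1                   # stage-cost weights
gamma = 0.95                      # discount factor
U = np.linspace(-2.0, 2.0, 41)    # finite control set (FCS-MPC spirit)

w = 0.0                           # critic weight: V(x) ~ w * x**2
alpha = 0.05                      # critic learning rate

def greedy_u(x, w):
    """Pick the control in the finite set minimizing
    stage cost plus discounted critic estimate of the successor state."""
    costs = q * x**2 + r * U**2 + gamma * w * (a * x + b * U)**2
    return U[np.argmin(costs)]

rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)                 # sample a training state
    u = greedy_u(x, w)
    x_next = a * x + b * u
    target = q * x**2 + r * u**2 + gamma * w * x_next**2  # Bellman target
    td_error = target - w * x**2
    w += alpha * td_error * x**2 / (x**2 + 1.0)  # normalized gradient step

# After learning, the greedy policy should stabilize the unstable plant.
x = 1.0
for _ in range(30):
    x = a * x + b * greedy_u(x, w)
print(round(w, 3), round(abs(x), 4))
```

With a quantized control set the state is driven into a small neighborhood of the origin rather than exactly to zero, which mirrors the steady-state ripple familiar from finite control-set model predictive control; a richer critic network replaces the single quadratic weight in the full framework.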