Scenario-based model predictive control (MPC) methods introduce recourse into optimal control and can thus reduce the conservatism inherent to open-loop robust MPC. However, the uncertainty scenarios are often generated offline using worst-case uncertainty bounds quantified a priori, limiting the potential gains in control performance. This paper presents a learning-based multistage MPC (msMPC) for systems with hard-to-model dynamics and time-varying plant-model mismatch. Gaussian processes (GPs) are used to learn the state- and input-dependent plant-model mismatch in real time and accordingly adapt the scenario tree online. Due to the increased computational complexity of incorporating the GP predictions into the optimal control problem, the learning-based msMPC (LB-msMPC) law is approximated by a deep neural network (DNN) that is cheap to evaluate online and has a small memory footprint, making it suitable for embedded applications. In addition, we present a novel algorithm for training the DNN-based controller that uses a GP description of the plant-model mismatch to generate closed-loop simulation data, which ensures the LB-msMPC law is evaluated in the regions of the state space most relevant to closed-loop operation. The proposed LB-msMPC strategy is demonstrated on a cold atmospheric plasma jet with applications in (bio)materials processing. The simulation results indicate the promise of the approximate LB-msMPC strategy for control of hard-to-model systems with fast dynamics on millisecond timescales.
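To make the core mechanism concrete, the sketch below illustrates how a GP fitted to observed plant-model mismatch residuals can supply adaptive scenario branches for a scenario tree, in the spirit of the approach summarized above. This is a minimal illustration, not the authors' implementation: the kernel hyperparameters are fixed rather than optimized, the residual is scalar, and the names (`MismatchGP`, `scenario_branches`, `n_sigma`) are hypothetical.

```python
import numpy as np


def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel between row vectors of A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)


class MismatchGP:
    """GP regression on mismatch residuals d = x_plant_next - f_model(x, u).

    A minimal sketch with fixed hyperparameters; a practical implementation
    would optimize them, e.g. by maximizing the marginal likelihood.
    """

    def __init__(self, length_scale=1.0, variance=1.0, noise=1e-4):
        self.ls, self.var, self.noise = length_scale, variance, noise

    def fit(self, Z, d):
        # Z: (N, nz) stacked state-input points; d: (N,) observed residuals.
        self.Z, self.d = Z, d
        K = rbf_kernel(Z, Z, self.ls, self.var) + self.noise * np.eye(len(Z))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, d))

    def predict(self, Zq):
        # Returns predictive mean and standard deviation at query points Zq.
        Ks = rbf_kernel(Zq, self.Z, self.ls, self.var)
        mean = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        var = np.diag(rbf_kernel(Zq, Zq, self.ls, self.var)) - (v**2).sum(0)
        return mean, np.sqrt(np.clip(var, 0.0, None))


def scenario_branches(gp, zq, n_sigma=2.0):
    # Branch values of the mismatch at query point zq: the predictive mean
    # plus/minus an n_sigma confidence band, yielding three tree branches
    # that shrink or widen as the learned uncertainty changes online.
    mean, std = gp.predict(zq[None, :])
    return np.array([mean[0], mean[0] + n_sigma * std[0], mean[0] - n_sigma * std[0]])
```

In a receding-horizon loop, the GP would be refitted (or updated recursively) as new residuals are observed, and the branch values would parameterize the disturbance realizations at each node of the msMPC scenario tree.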