Abstract

We present a new algorithm to track the amplitude and phase of rotating magnetohydrodynamic (MHD) modes in tokamak plasmas using high-speed imaging cameras and deep learning. This algorithm uses a convolutional neural network (CNN) to predict the amplitudes of the n = 1 sine and cosine mode components using solely optical measurements from one or more cameras. The model was trained and tested on an experimental dataset consisting of camera frame images and magnetic-based mode measurements from the High Beta Tokamak - Extended Pulse (HBT-EP) device, and it outperformed other, more conventional algorithms given identical image inputs. The effect of different input data streams on the accuracy of the model's predictions is also explored, including the use of a temporal frame stack or images from two cameras viewing different toroidal regions.
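
As an illustration only (not the authors' implementation), a minimal CNN regression sketch of the kind described above could map one or more camera frames to the two n = 1 mode components; the framework (PyTorch), layer sizes, and the 64x80 frame shape below are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class ModeAmplitudeCNN(nn.Module):
    """Sketch of a CNN that predicts the n = 1 cosine and sine mode amplitudes
    from camera images. `in_frames` > 1 would correspond to a temporal frame
    stack or to frames from a second camera viewing a different toroidal region."""

    def __init__(self, in_frames: int = 1):
        super().__init__()
        # Convolutional feature extractor over the camera image(s).
        self.features = nn.Sequential(
            nn.Conv2d(in_frames, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: two outputs, the cosine and sine components.
        self.head = nn.Linear(64, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return self.head(z)

if __name__ == "__main__":
    model = ModeAmplitudeCNN(in_frames=3)   # e.g. a hypothetical 3-frame temporal stack
    frames = torch.randn(8, 3, 64, 80)      # batch of illustrative camera frames
    cos_sin = model(frames)                 # shape (8, 2): cosine and sine components
    amplitude = cos_sin.norm(dim=1)         # mode amplitude from the two components
    phase = torch.atan2(cos_sin[:, 1], cos_sin[:, 0])  # mode phase
    print(amplitude.shape, phase.shape)
```

The mode amplitude and phase follow directly from the two predicted components, as sqrt(cos^2 + sin^2) and atan2(sin, cos), so a model regressing the cosine and sine parts yields both quantities tracked in the abstract.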
