A key step in learning the violin is mastering control over various bowing techniques, since the manner in which the bow is drawn across the strings directly shapes the sound produced. Because violinists benefit from frequent feedback on their bowing motion, there is a need for digital tools that provide such feedback automatically. This study uses a 60 GHz frequency-modulated continuous-wave (FMCW) radar to capture the motion of the violinist's bowing arm for seven bowing gestures: détaché up, détaché down, spiccato up, spiccato down, staccato up, staccato down, and tremolo. A total of 1200 bowing gestures from 3 violinists are recorded with the radar. The raw radar signal is processed into time-Doppler spectrograms of the gestures, and features are extracted from the time-Doppler data using two different methods and fed into machine learning models for automated classification of bowing gestures. The first method extracts manually engineered features from the signal data matrix. The second leverages Convolutional Neural Networks (CNNs) to extract features automatically from images of the time-Doppler spectrograms. A comparison of model performances shows that fine-tuning a pre-trained SqueezeNet CNN yields the highest classification accuracy (95.00%). This study also analyzes how fluctuations in the overall user-to-radar range influence the time-Doppler spectrograms produced.
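The spectrogram-generation step mentioned above (slow-time radar returns transformed into a time-Doppler representation) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual processing chain: the sampling rate, STFT window parameters, and the synthetic sinusoidal "bowing" signal are all assumed for demonstration.

```python
import numpy as np
from scipy.signal import stft

# Assumed slow-time (chirp-to-chirp) sampling rate in Hz; the paper's
# actual radar configuration is not specified here.
fs = 2000.0
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of slow-time samples

# Synthetic micro-Doppler signal standing in for the echo from the
# bowing arm: a sinusoidally varying Doppler shift mimics the periodic
# back-and-forth motion of a bow stroke.
doppler_hz = 200.0 * np.sin(2 * np.pi * 1.5 * t)   # instantaneous Doppler
phase = 2 * np.pi * np.cumsum(doppler_hz) / fs
signal = np.exp(1j * phase)                        # complex slow-time signal

# Short-time Fourier transform over slow time yields the time-Doppler
# spectrogram; two-sided output is needed for the complex radar signal.
f, tau, Zxx = stft(signal, fs=fs, nperseg=256, noverlap=192,
                   return_onesided=False)
spectrogram_db = 20.0 * np.log10(np.abs(Zxx) + 1e-12)
print(spectrogram_db.shape)  # (Doppler bins, time frames)
```

Images of such spectrograms could then be fed to a CNN, or summary statistics of the Doppler trace used as hand-engineered features, mirroring the two feature-extraction methods compared in the study.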