We classify acoustic events recorded by a fiber-optic distributed acoustic sensor (DAS). The raw measurements are obtained by probing the fiber with light pulses and analyzing the Rayleigh backscatter, and are then passed through a pipeline of processing algorithms to form the input to our machine learning classification model. We apply random matrix theory to distinguish the acoustic events of interest from noise, and condition the raw traces with moving-average and wavelet-based filtering to improve the signal-to-noise ratio. The raw, low-pass-filtered, and wavelet-filtered signals are fed to a convolutional neural network (CNN), which categorizes each event from the magnitudes of its complex coefficients. We also investigate Mel-frequency cepstral coefficients (MFCCs) of each event as an alternative input to the classifier and compare their performance to the other signal representations. We run CNN experiments for two-class and three-class classification using datasets from a DAS deployed for perimeter security and pipeline monitoring. The best results are obtained with MFCCs paired with wavelet denoising: in the two-class setting, accuracies of 96.4% for the “event” class and 99.7% for the “no event” class; in the three-class setting, accuracies of 83.3%, 81.3%, and 96.7% for the “digging,” “walking,” and “excavation” classes, respectively. Finally, because the dataset is extensive and the model’s architecture is complex, training times are long; we therefore make efficient use of both the CPU and the GPU via the Keras API’s sequence data generator, achieving a speedup of up to 4.87 times over the serial implementation.
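The signal-conditioning stage (moving-average and wavelet-based filtering) can be sketched as follows. This is an illustrative NumPy implementation, not the paper's actual pipeline: the function names, the choice of the Haar wavelet, and the universal soft-threshold rule are assumptions for the sketch, since the abstract does not specify the wavelet family or thresholding scheme.

```python
import numpy as np

def moving_average(x, w=5):
    """Simple low-pass smoothing by a length-w moving average."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def haar_dwt(x):
    """One level of the (orthonormal) Haar discrete wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar DWT level."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t (soft thresholding)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(signal, levels=3):
    """Multi-level Haar decomposition with universal soft thresholding.
    Assumes len(signal) is divisible by 2**levels."""
    approx, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    # Universal threshold sigma * sqrt(2 log N), with sigma estimated
    # from the finest-scale detail coefficients via the MAD.
    sigma = np.median(np.abs(details[0])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(signal)))
    details = [soft_threshold(d, t) for d in details]
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx
```

On a noisy sinusoid, `wavelet_denoise` typically reduces the mean-squared error against the clean signal substantially, which is the SNR improvement the conditioning stage is after.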
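MFCC extraction follows a standard recipe: frame the signal, take the power spectrum, apply a triangular mel filterbank, log-compress, and decorrelate with a DCT. The compact NumPy/SciPy sketch below illustrates that recipe; the frame length, hop size, and filter counts are illustrative defaults, not values taken from the paper.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):          # rising edge
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling edge
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr, frame_len=400, hop=160, n_filters=26, n_ceps=13):
    """Frames -> power spectrum -> mel energies -> log -> DCT."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / frame_len
    fb = mel_filterbank(n_filters, frame_len, sr)
    log_energies = np.log(power @ fb.T + 1e-10)
    return dct(log_energies, type=2, norm="ortho", axis=1)[:, :n_ceps]
```

Each row of the returned matrix is the cepstral feature vector for one frame; stacking them over an event window yields the 2-D MFCC representation that can be fed to the CNN.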
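The Keras sequence data generator mentioned at the end serves mini-batches by index, so batch preparation on the CPU can overlap with GPU training. The class below is a dependency-free stand-in that mirrors that interface; in practice one would subclass `tf.keras.utils.Sequence` and pass the instance to `model.fit`. The class name and parameters here are hypothetical.

```python
import numpy as np

class DASBatchSequence:
    """Minimal stand-in for the keras.utils.Sequence pattern:
    indexable mini-batches plus an end-of-epoch reshuffle."""

    def __init__(self, features, labels, batch_size=32, seed=0):
        self.x = np.asarray(features)
        self.y = np.asarray(labels)
        self.batch_size = batch_size
        self.rng = np.random.default_rng(seed)
        self.order = np.arange(len(self.x))

    def __len__(self):
        # Number of batches per epoch (final partial batch included).
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        # Slice the shuffled index array, then gather that batch.
        sel = self.order[idx * self.batch_size : (idx + 1) * self.batch_size]
        return self.x[sel], self.y[sel]

    def on_epoch_end(self):
        # Reshuffle so each epoch sees the data in a new order.
        self.rng.shuffle(self.order)
```

Because each batch is fetched lazily by index rather than materialized up front, the full dataset never has to fit in GPU memory at once, which is what makes this pattern effective for the large DAS dataset described above.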