Acoustic signals are vital in animal communication, and quantifying them is fundamental to understanding animal behaviour and ecology. Vocalizations can be classified into categories that are acoustically distinct and differ in function or context, but establishing these categories can be challenging. Recently developed methods, such as machine learning, can provide solutions for such classification tasks. The plains zebra is known for its loud and distinctive vocalizations, yet little is known about the structure and information content of these vocalizations. In this study, we employed both feature-based and spectrogram-based algorithms, incorporating supervised and unsupervised machine learning methods, to enhance the robustness of categorizing zebra vocalization types. Additionally, we implemented a permuted discriminant function analysis to examine the individual identity information contained in the identified vocalization types. The findings revealed at least four distinct vocalization types: the 'snort', the 'soft snort', the 'squeal' and the 'quagga quagga'. Individual differences were observed mostly in snorts and, to a lesser extent, in squeals. Analyses based on acoustic features outperformed those based on spectrograms, but each excelled at characterizing different vocalization types; we therefore recommend the combined use of these two approaches. This study offers valuable insights into plains zebra vocalization, with implications for future comprehensive explorations of animal communication.
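To make the feature-based, supervised side of such a pipeline concrete, the sketch below trains a random-forest classifier on simulated acoustic features for four call types. This is purely illustrative: the abstract does not specify the classifier, features, or software used, so the feature names (call duration, peak frequency), the class means, and the use of scikit-learn are all assumptions, and the data are randomly generated rather than measured from zebra recordings.

```python
# Hypothetical sketch of feature-based supervised call-type classification.
# All numbers below are invented for illustration; a real study would use
# measured acoustic features extracted from recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated features: [duration in s, peak frequency in Hz] per call,
# with one (made-up) mean per hypothetical call type.
call_types = ["snort", "soft_snort", "squeal", "quagga_quagga"]
means = np.array([[0.3, 200.0],    # snort
                  [0.2, 120.0],    # soft snort
                  [0.8, 1200.0],   # squeal
                  [1.5, 700.0]])   # quagga quagga
n_per_class = 30
X = np.vstack([rng.normal(m, [0.05, 30.0], size=(n_per_class, 2))
               for m in means])
y = np.repeat(call_types, n_per_class)

# 5-fold cross-validated accuracy of a random forest on these features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

An unsupervised counterpart (e.g. clustering the same feature matrix) could then be compared against these supervised labels to check the robustness of the category boundaries.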