Passive acoustic monitoring is a promising tool for monitoring at-risk populations of vocal species, yet extracting relevant information from large acoustic datasets can be time-consuming, creating a bottleneck at the point of analysis. To address this, an open-source framework for deep learning in bioacoustics is adapted to automatically detect Bornean white-bearded gibbon (Hylobates albibarbis) "great call" vocalizations in a long-term acoustic dataset from a rainforest location in Borneo. The steps involved in developing this solution are described, including collecting audio recordings, developing training and testing datasets, training neural network models, and evaluating model performance. The best model performed at a satisfactory level (F score = 0.87), identifying 98% of the highest-quality calls from 90 h of manually annotated audio recordings and greatly reducing analysis times when compared to a human observer. No significant difference was found in the temporal distribution of great call detections between the manual annotations and the model's output. Future work should seek to apply this model to long-term acoustic datasets to understand spatiotemporal variation in H. albibarbis' calling activity. Overall, a roadmap is presented for applying deep learning to identify the vocalizations of species of interest, which can be adapted for monitoring other endangered vocalizing species.
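The F score reported above combines precision (the fraction of detections that are true calls) and recall (the fraction of annotated calls that are detected). A minimal sketch of how such a score can be computed from detector output and manual annotations is shown below; the function names, the greedy matching strategy, and the 2-second tolerance are illustrative assumptions, not the authors' actual evaluation code.

```python
# Hypothetical sketch of detector evaluation against manual annotations.
# Annotations and detections are represented here as call start times (s).

def match_detections(annotations, detections, tolerance=2.0):
    """Greedily match each detection to an unmatched annotation whose start
    time lies within `tolerance` seconds; return (TP, FP, FN) counts."""
    unmatched = list(annotations)
    tp = 0
    for det in detections:
        hit = next((a for a in unmatched if abs(a - det) <= tolerance), None)
        if hit is not None:
            unmatched.remove(hit)  # each annotation may be matched only once
            tp += 1
    fp = len(detections) - tp  # detections with no corresponding annotation
    fn = len(unmatched)        # annotated calls the model missed
    return tp, fp, fn

def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall (F1)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Example: three annotated calls, three detections, one false alarm.
tp, fp, fn = match_detections([10.0, 50.0, 120.0], [10.5, 49.0, 200.0])
print(tp, fp, fn, round(f_score(tp, fp, fn), 3))  # → 2 1 1 0.667
```

In practice, matching would be done on time intervals rather than single start times, but the precision/recall trade-off it quantifies is the same one summarized by the F score above.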