Continuous capnography monitors patient ventilation but is susceptible to artifact, resulting in alarm fatigue. Smart algorithms may enable accurate detection of abnormal ventilation, allowing intervention before patient deterioration. The objective of this analysis was to use machine learning (ML) to classify combined continuous capnography and pulse oximetry waveforms as normal or abnormal. We used data collected during the observational, prospective PRODIGY trial, in which patients receiving parenteral opioids underwent continuous capnography and pulse oximetry monitoring on the general care floor [1]. Abnormal ventilation segments in the data stream were reviewed by nine experts, and inter-rater agreement was assessed. Abnormal segments were defined as the time series from 60 s before to 30 s after a detected abnormal pattern. Normal segments (90 s of continuous monitoring) were randomly sampled and filtered to discard sequences with missing values. Five ML models were trained on extracted features and optimized toward an F_β score with β = 2, weighting recall over precision. Inter-rater agreement was high (> 87%), yielding 7,858 sequences (2,944 abnormal) for model development. Data were divided into 80% training and 20% test sequences. The XGBoost model achieved the highest F_β score, 0.94, with a recall of 0.98 at a precision of 0.83. This study presents a promising advance in respiratory monitoring, aimed at reducing false alarms and improving the accuracy of alarm systems. Our algorithm reliably distinguishes normal from abnormal waveforms; further research is needed to define patterns that separate true abnormal ventilation from artifact.
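
For reference, the F_β score that the models were optimized toward is the standard weighted harmonic mean of precision and recall; with β = 2 it weights recall four times as heavily as precision:

    F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}

Plugging in the reported test-set values, F_2 = 5 · (0.83 · 0.98) / (4 · 0.83 + 0.98) ≈ 0.95, which matches the reported 0.94 up to rounding of the precision and recall inputs.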
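
As an illustration of the segmentation step, the following is a minimal sketch of extracting a 90 s window (60 s before and 30 s after a detected abnormal pattern) and discarding sequences with missing values. The sampling rate, function name, and stream layout are assumptions for illustration, not details from the study.

import numpy as np

def extract_segment(stream: np.ndarray, detection_idx: int, fs: float = 1.0):
    """Return the 90 s window around a detected abnormal pattern, or None
    if the window runs off the recording or contains missing values."""
    start = detection_idx - int(60 * fs)  # 60 s before detection
    stop = detection_idx + int(30 * fs)   # 30 s after detection
    if start < 0 or stop > len(stream):
        return None
    segment = stream[start:stop]
    # Discard sequences with missing values, mirroring the filtering step.
    return None if np.isnan(segment).any() else segment

# Toy usage: 10 minutes of a 1 Hz signal with a detection at t = 120 s.
demo = np.random.default_rng(1).normal(size=600)
segment = extract_segment(demo, detection_idx=120)
assert segment is not None and segment.shape == (90,)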
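
The model-development step can likewise be sketched as below: an 80/20 split and an XGBoost classifier selected by cross-validation against an F_2 scorer. The synthetic data, feature count, and hyperparameter grid are placeholders; the study's actual extracted features and settings are not reproduced here.

import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import fbeta_score, make_scorer
from xgboost import XGBClassifier

# Placeholder features standing in for those extracted from the 90 s
# capnography/pulse-oximetry segments (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(7858, 20))
y = rng.integers(0, 2, size=7858)  # 1 = abnormal segment

# 80% training / 20% test, as in the study; stratify to keep class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Optimize toward F-beta with beta = 2, favoring recall over precision.
f2_scorer = make_scorer(fbeta_score, beta=2)
search = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_grid={"max_depth": [3, 5], "n_estimators": [100, 300]},
    scoring=f2_scorer,
    cv=5,
)
search.fit(X_train, y_train)

y_pred = search.predict(X_test)
print("Test F2:", round(fbeta_score(y_test, y_pred, beta=2), 2))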