Airtime is crucial for high-rotation tricks in snowboard halfpipe performance, strongly influencing trick difficulty, the primary judging criterion. Because manual video-based methods are too time-consuming, this study aims to improve the detection of take-off and landing events from inertial measurement unit (IMU) data using machine learning. Eight elite German National Team snowboarders performed 626 halfpipe tricks, recorded by two IMUs on the lateral lower legs and a video camera. The IMU data, synchronized with video, were manually labeled and segmented for analysis. Using a 1D U-Net convolutional neural network (CNN), we outperformed a threshold-based baseline in all experiments, establishing new benchmarks for this binary segmentation task. With data from a single IMU on the left lower leg, the U-Net achieved an 80.34% lower mean Hausdorff distance than the threshold approach on unseen runs; using both left and right IMUs further improved performance (83.37% lower mean Hausdorff distance). For data from an athlete unseen during training (Zero-Shot segmentation), the U-Net outperformed the threshold algorithm by 67.58%, and fine-tuning on athlete-specific runs (Few-Shot segmentation) increased this margin to 78.68%. The fine-tuned model detected take-offs with a median deviation of 0.008 s (IQR 0.030 s), landings with a median deviation of 0.005 s (IQR 0.020 s), and airtimes with a median deviation of 0.000 s (IQR 0.027 s). These advances enable real-time feedback and detailed biomechanical analysis of critical events such as take-off and landing, where precise time-domain localization is essential for providing accurate feedback to coaches and athletes.
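
As an illustrative aside (not code from the study), the Hausdorff distance used as the evaluation metric above can be sketched for sets of event times as follows; the event times and function names are hypothetical examples, not values from the paper:

```python
def hausdorff_distance(pred, truth):
    """Symmetric Hausdorff distance between two sets of event times (s)."""
    def directed(a, b):
        # Largest distance from any point in a to its nearest point in b.
        return max(min(abs(x - y) for y in b) for x in a)
    return max(directed(pred, truth), directed(truth, pred))

# Hypothetical take-off/landing times (in seconds) for one run:
truth = [1.20, 2.45, 3.90]
pred = [1.21, 2.44, 3.95]
print(hausdorff_distance(pred, truth))  # ≈ 0.05, the worst-case deviation
```

A lower Hausdorff distance thus means every predicted event lies close to a true event and vice versa, which is why the percentage reductions reported above indicate tighter event localization.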