Falls are a leading contributing factor to both fatal and nonfatal injuries in the elderly. Pre-impact fall detection, which identifies a fall before the body collides with the floor, is therefore essential, and researchers have recently turned their attention from post-impact to pre-impact fall detection. Pre-impact fall detection solutions typically use either a threshold-based or a machine learning-based approach, but the threshold value is difficult to determine accurately in threshold-based methods. Moreover, while additional features can help distinguish falls from non-falls more precisely, determining which features are significant is time-intensive and consumes a considerable portion of the algorithm's running time. In this work, we developed a deep residual network with aggregated transformations, called FDSNeXt, for pre-impact fall detection using wearable inertial sensors. The proposed network addresses the limitations of feature extraction, threshold definition, and algorithm complexity. After training on a large-scale motion dataset, the KFall dataset, and evaluation with standard metrics, the proposed approach detected pre-impact and impact falls with high accuracies of 91.87% and 92.52%, respectively. In addition, we investigated the fall detection performance of three state-of-the-art deep learning models: a convolutional neural network (CNN), a long short-term memory network (LSTM), and a hybrid model (CNN-LSTM). The experimental results showed that the proposed FDSNeXt model outperformed these models (CNN, LSTM, and CNN-LSTM) with significant improvements.
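To make the "residual network with aggregated transformations" idea concrete, the following is a minimal sketch, in PyTorch, of a ResNeXt-style 1D residual block applied to windows of wearable inertial-sensor data. The channel widths, cardinality, window length, and class names here are illustrative assumptions, not the published FDSNeXt configuration.

```python
# Hypothetical sketch of a ResNeXt-style 1D residual block for inertial-sensor
# windows (e.g., accelerometer + gyroscope channels). Layer widths, cardinality,
# and window length are assumptions, not the authors' exact FDSNeXt design.
import torch
import torch.nn as nn


class ResNeXtBlock1D(nn.Module):
    """Residual block with aggregated transformations via grouped 1D convolution."""

    def __init__(self, channels: int = 64, cardinality: int = 8):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            # groups=cardinality splits the transformation into parallel paths
            # whose outputs are aggregated, as in ResNeXt.
            nn.Conv1d(channels, channels, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm1d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut plus the aggregated transformation (residual learning).
        return self.relu(x + self.transform(x))


if __name__ == "__main__":
    # Example: a batch of 16 windows, 6 sensor channels (3-axis accelerometer
    # and gyroscope) projected to 64 features, 100 time steps per window
    # (assumed values).
    stem = nn.Conv1d(6, 64, kernel_size=7, padding=3)
    block = ResNeXtBlock1D(channels=64, cardinality=8)
    window = torch.randn(16, 6, 100)
    features = block(stem(window))
    print(features.shape)  # torch.Size([16, 64, 100])
```

In this sketch, the grouped convolution realizes the "split-transform-aggregate" strategy that distinguishes aggregated-transformation residual blocks from a plain ResNet block; a classification head over the pooled features would then separate pre-impact fall, impact, and non-fall windows.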