Objective. This study aims to leverage deep learning, specifically deformable convolution, to stage cervical cancer from multi-sequence MRI. Clinicians struggle to interpret multiple scan sequences simultaneously, a task that computer-aided diagnosis systems can potentially support by integrating information across sequences. Approach. To address the challenge of limited sample sizes, we introduce a sequence-enhancement strategy that diversifies the training samples and mitigates overfitting. We propose a novel deformable ConvLSTM module that integrates a deformable mechanism into ConvLSTM, enabling the model to adapt to data with varying structures. Building on this module, we introduce the deformable multi-sequence guidance model (DMGM) as an auxiliary diagnostic tool for cervical cancer staging. Main results. Through extensive experiments, including comparative and ablation studies, we validate the effectiveness of both the deformable ConvLSTM module and the DMGM. The results show that the deformation mechanism enables the model to handle the shape variability of cervical tumors, alleviating overfitting and compensating for the asynchrony of the scan sequences. We further used the multi-modal data of BraTS 2019 as an external test dataset to validate the generalizability of the proposed method. Significance. The DMGM is the first deep learning model to analyze multiple MRI sequences for cervical cancer, demonstrating strong generalization and effective staging in small-dataset scenarios. This has significant implications for both deep learning applications and medical diagnostics. The source code will be made available subsequently.