Multi-organ segmentation is a critical task in medical imaging, with wide-ranging applications in both clinical practice and research. Accurate delineation of organs from high-resolution 3D medical images, such as CT scans, is essential for radiation therapy planning, where it improves treatment outcomes and reduces the risk of radiation toxicity. It also plays a pivotal role in quantitative image analysis, supporting a variety of medical research studies. Despite its importance, manual segmentation of multiple organs from 3D images is labor-intensive and suffers from low reproducibility due to high inter-operator variability. Recent advances in deep learning have produced several automated segmentation methods, but many rely heavily on labeled data and expert knowledge of human anatomy.

Our primary objective in this study is to address the limitations of existing semi-supervised learning (SSL) methods for abdominal multi-organ segmentation. We introduce a novel SSL approach that leverages unlabeled data to improve the performance of deep neural networks in segmenting abdominal organs. Specifically, we incorporate a redrawing network into the segmentation process to correct errors and improve accuracy. The proposed method comprises three interconnected neural networks: a segmentation network for image segmentation, a teacher network for consistency regularization, and a redrawing network for object redrawing. During training, the segmentation network undergoes two rounds of optimization: basic training and readjustment. We adopt the mean-teacher (MT) model as our baseline SSL approach, using both labeled and unlabeled data. However, because this method alone still produces significant errors in abdominal multi-organ segmentation, we introduce the redrawing network, which generates redrawn images based on the CT scans while preserving the original anatomical information.
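The mean-teacher scheme underlying the basic training phase can be sketched in a few lines. This is a minimal NumPy illustration of the two core ingredients only, the exponential-moving-average (EMA) teacher update and the consistency penalty between student and teacher predictions; the function names, the toy weights, and the choice of mean squared error are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ema_update(teacher_w, student_w, alpha=0.99):
    """Teacher weights track the student weights via an exponential moving average."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def consistency_loss(student_pred, teacher_pred):
    """Mean squared error between student and teacher predictions (one common choice)."""
    return float(np.mean((student_pred - teacher_pred) ** 2))

# Toy stand-ins for network parameters: after one EMA step, the teacher
# moves a small fraction (1 - alpha) toward the student.
teacher_w = np.zeros(4)
student_w = np.ones(4)
teacher_w = ema_update(teacher_w, student_w)

# On unlabeled images, the same input under small perturbations should yield
# consistent predictions from the two networks; the loss penalizes disagreement.
student_pred = rng.random((2, 3))
teacher_pred = student_pred + 0.01  # hypothetical small disagreement
loss = consistency_loss(student_pred, teacher_pred)
```

In the full method this consistency term is computed on unlabeled CT volumes and added to the supervised segmentation loss on the labeled subset.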
Our approach is grounded in the hypothesis of a generative process comprising segmentation, drawing, and assembling stages; correct segmentation is a prerequisite for generating accurate images. In the basic training phase, the segmentation network is trained on both labeled and unlabeled data, with consistency learning enforcing consistent predictions before and after perturbations. The readjustment phase reduces segmentation errors by optimizing the segmentation network parameters based on the differences between the redrawn and original CT images.

We evaluated our method on two publicly available datasets: the Beyond the Cranial Vault (BTCV) segmentation dataset (training: 44, validation: 6) and the Abdominal Multi-Organ Segmentation (AMOS) challenge 2022 dataset (training: 138, validation: 16). We compared our results with those of state-of-the-art SSL methods, including MT and dual-task consistency (DTC), using the Dice similarity coefficient (DSC) as the accuracy metric. On both datasets, the proposed SSL method consistently outperformed the other methods, including supervised learning, achieving superior segmentation performance across various abdominal organs. These findings demonstrate the effectiveness of our approach even with a limited amount of labeled data.

In summary, our novel semi-supervised learning approach addresses the challenges of abdominal multi-organ segmentation. By integrating a redrawing network and leveraging unlabeled data, it achieves substantial improvements in accuracy and outperforms existing SSL and supervised learning methods. This approach holds great promise for enhancing the precision and efficiency of multi-organ segmentation in medical imaging applications.
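The evaluation metric used above, the Dice similarity coefficient, has a standard per-organ definition that can be sketched as follows; the function and the toy masks are illustrative, not the evaluation code used in the study.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), with a small eps guarding empty masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D example: predicted mask covers two voxels, ground truth covers one,
# and they overlap in one voxel, so DSC = 2*1 / (2 + 1) ≈ 0.667.
pred = np.array([[1, 1, 0], [0, 0, 0]])
target = np.array([[1, 0, 0], [0, 0, 0]])
dsc = dice_coefficient(pred, target)
```

For multi-organ evaluation, the coefficient is computed per organ label and typically averaged across organs and cases.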