Accurate segmentation of the rectal tumor and the rectum in magnetic resonance imaging (MRI) is essential for precise tumor diagnosis and treatment planning. The variable shapes and unclear boundaries of rectal tumors make this task particularly challenging. Only a few studies have explored deep learning networks for rectal tumor segmentation, and most of them adopt the classical encoder-decoder structure. The frequent downsampling operations during feature extraction cause a loss of detailed information, limiting the network's ability to capture the shape and boundary of rectal tumors precisely. This paper proposes a Reconstruction-regularized Parallel Decoder network (RPDNet) to address this information loss and to obtain accurate co-segmentation of the rectal tumor and the rectum. RPDNet first establishes a shared-encoder, parallel-decoder framework that fully exploits the knowledge shared between the two segmentation targets while reducing the number of network parameters. An auxiliary reconstruction branch is then introduced, which computes a consistency loss between the reconstructed and input images to preserve sufficient anatomical structure information. Moreover, a parameter-free target-adaptive attention module is proposed to distinguish unclear boundaries by enhancing the feature-level contrast between rectal tumors and normal tissue. Experimental results show that the proposed method outperforms state-of-the-art approaches on the rectal tumor and rectum segmentation tasks, with Dice coefficients of 84.91% and 90.36%, respectively, demonstrating its potential value in clinical practice.
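To make the described layout concrete, the following is a minimal, hypothetical PyTorch sketch of a shared encoder feeding two parallel segmentation decoders (tumor and rectum) plus an auxiliary reconstruction decoder whose output is compared to the input image with an L1 consistency loss. It is not the authors' implementation: channel sizes, block design, and names such as `RPDNetSketch` and `Decoder` are illustrative assumptions, and the target-adaptive attention module is omitted.

```python
# Sketch only: shared encoder + parallel decoders + reconstruction branch.
# All architectural choices here are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 conv layers with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class Decoder(nn.Module):
    """U-Net-style decoder that consumes the shared encoder features."""

    def __init__(self, chs, out_ch):
        super().__init__()
        self.ups = nn.ModuleList(
            [nn.ConvTranspose2d(chs[i], chs[i + 1], 2, stride=2) for i in range(len(chs) - 1)])
        self.blocks = nn.ModuleList(
            [conv_block(chs[i + 1] * 2, chs[i + 1]) for i in range(len(chs) - 1)])
        self.head = nn.Conv2d(chs[-1], out_ch, 1)

    def forward(self, feats):
        x = feats[-1]  # deepest encoder feature
        for up, block, skip in zip(self.ups, self.blocks, reversed(feats[:-1])):
            x = block(torch.cat([up(x), skip], dim=1))  # upsample, fuse skip connection
        return self.head(x)


class RPDNetSketch(nn.Module):
    """Shared encoder, two parallel segmentation decoders, one reconstruction decoder."""

    def __init__(self, in_ch=1, base=32):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.enc = nn.ModuleList(
            [conv_block(in_ch, chs[0])] + [conv_block(chs[i], chs[i + 1]) for i in range(3)])
        self.pool = nn.MaxPool2d(2)
        rev = chs[::-1]
        self.dec_tumor = Decoder(rev, out_ch=1)       # rectal tumor mask
        self.dec_rectum = Decoder(rev, out_ch=1)      # rectum mask
        self.dec_recon = Decoder(rev, out_ch=in_ch)   # auxiliary image reconstruction

    def forward(self, x):
        feats = []
        for i, block in enumerate(self.enc):
            x = block(x if i == 0 else self.pool(x))  # downsample between encoder stages
            feats.append(x)
        return self.dec_tumor(feats), self.dec_rectum(feats), self.dec_recon(feats)


if __name__ == "__main__":
    net = RPDNetSketch()
    img = torch.randn(2, 1, 128, 128)
    tumor_logits, rectum_logits, recon = net(img)
    # Reconstruction consistency term that regularizes the shared encoder
    # to retain anatomical structure information.
    consistency_loss = F.l1_loss(recon, img)
    print(tumor_logits.shape, rectum_logits.shape, consistency_loss.item())
```

In this sketch the reconstruction decoder is used only as a training-time regularizer; at inference, only the two segmentation heads would be kept.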