The field of 3D medical image segmentation is increasingly adopting hybrid networks that combine convolutional neural networks (CNNs) and transformers. However, prevailing hybrid networks rely on straightforward serial or parallel combinations and lack an effective mechanism for fusing channel and spatial feature attention. To address these limitations, we present the Transformer-Driven Pyramid Attention Fusion Network (TPAFNet), a robust multi-scale 3D medical image segmentation network built on a hybrid CNN-transformer structure. Within this framework, we exploit atrous convolution to extract multi-scale information effectively, thereby enhancing the encoding results of the transformer. Furthermore, we introduce the TPAF block in the encoder to fuse channel and spatial feature attention from multi-scale feature inputs. In contrast to conventional skip connections that simply concatenate or add features, our decoder is equipped with a TPAF connection that better integrates feature attention between low-level and high-level features. Additionally, we propose a low-level encoding shortcut from the original input to the decoder output, preserving more of the original image features and contributing to improved results. Finally, deep supervision is implemented with a novel CNN-based voxel-wise classifier to facilitate better network convergence. Experimental results demonstrate that TPAFNet significantly outperforms other state-of-the-art networks on two public datasets, indicating that our approach can effectively improve the accuracy of medical image segmentation and thereby assist doctors in making more precise diagnoses.
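To make the core idea concrete, the sketch below shows one plausible way a pyramid attention fusion block could combine parallel atrous convolutions with channel and spatial attention. The abstract does not specify the TPAF block's internals, so the dilation rates, the SE-style channel attention, the CBAM-style spatial attention, the fusion by element-wise reweighting, and the class name `TPAFBlockSketch` are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: internals (dilations, attention designs, fusion
# scheme) are assumptions, not the TPAFNet authors' actual implementation.
import torch
import torch.nn as nn


class TPAFBlockSketch(nn.Module):
    """Hypothetical pyramid attention fusion block for 3D feature maps."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel atrous (dilated) 3D convolutions capture multi-scale context.
        self.branches = nn.ModuleList(
            nn.Conv3d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, bias=False)
            for d in dilations
        )
        self.fuse = nn.Conv3d(channels * len(dilations), channels, kernel_size=1)
        # SE-style channel attention (assumed design).
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # CBAM-style spatial attention (assumed design).
        self.spatial_att = nn.Sequential(
            nn.Conv3d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multi-scale extraction via the parallel dilated branches.
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        feats = self.fuse(multi_scale)
        # Channel attention reweights each feature map globally.
        feats = feats * self.channel_att(feats)
        # Spatial attention from channel-wise mean and max descriptors.
        desc = torch.cat([feats.mean(dim=1, keepdim=True),
                          feats.amax(dim=1, keepdim=True)], dim=1)
        feats = feats * self.spatial_att(desc)
        # Residual connection preserves the input signal.
        return feats + x


x = torch.randn(1, 32, 16, 32, 32)  # (batch, channels, D, H, W)
out = TPAFBlockSketch(32)(x)
assert out.shape == x.shape
```

The same block could also serve as the TPAF skip connection by applying it to concatenated low-level and high-level features, though the paper body, not the abstract, would determine the exact wiring.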