Glioma segmentation is critical for surgical planning. Traditional glioma segmentation methods have recently become less competitive than two deep-learning strategies: patch-based methods, which focus on local features around each pixel, and image-based methods, which fully leverage global features to capture the overall shape, size, and other characteristics of the lesion. In this study, we investigate and integrate the advantages of 2D and 3D image-based architectures and propose a new convolutional neural network, the Cascaded Hybrid Residual U-Net (CHR-U-Net), for MRI glioma segmentation. The CHR-U-Net exploits 2D local features and 3D global spatial context simultaneously. In the first level of the CHR-U-Net, the R-2D-U-Net combines a 2D-U-Net with residual units to detect the lesion area quickly and with high sensitivity, so that almost no lesion voxels are missed. The output of the R-2D-U-Net is then resampled by hard mining to collect more likely false-positive samples for the second stage. In the second level of the CHR-U-Net, axial, coronal, and sagittal 3D-U-Nets are trained to predict whether each voxel belongs to the glioma region, and the results of the three 3D-U-Nets are fused to improve accuracy and reduce false positives. The BRATS 2017 challenge database was used in our experiments for verification. Dice scores and sensitivities were calculated for the enhancing, whole, and core tumor regions: the Dice scores are 0.73, 0.90, and 0.83, and the sensitivities are 0.83, 0.90, and 0.82, respectively. Experimental results show that the proposed model significantly improves glioma segmentation performance.
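
The abstract does not specify the rule used to fuse the three view-specific predictions. A minimal NumPy sketch, assuming a simple majority vote over binary masks resampled to a common grid (the function name and threshold are hypothetical, not from the paper):

```python
import numpy as np

def fuse_views(axial, coronal, sagittal, min_votes=2):
    """Fuse binary glioma masks from three orthogonal 3D-U-Nets.

    Each input is a boolean array of the same shape. A voxel is kept
    only if at least `min_votes` of the three networks mark it as tumor,
    which suppresses false positives raised by any single view.
    """
    votes = (axial.astype(np.int32)
             + coronal.astype(np.int32)
             + sagittal.astype(np.int32))
    return votes >= min_votes

# Toy 1-D example standing in for a flattened volume:
ax = np.array([1, 1, 0, 1], dtype=bool)
co = np.array([1, 0, 0, 1], dtype=bool)
sa = np.array([0, 1, 0, 1], dtype=bool)
fused = fuse_views(ax, co, sa)
# Voxels with at least two votes survive; the voxel flagged by no view is dropped.
```

Other fusion choices (e.g., averaging soft probability maps before thresholding) would fit the same cascade; the voting form simply makes the false-positive reduction explicit.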