To improve the uncertainty quantification of variance networks, we propose a novel tree-structured local neural network model that partitions the feature space into multiple regions according to uncertainty heterogeneity. Given the training data, a tree is built whose leaf nodes represent different regions, and a region-specific neural network is trained at each leaf to predict both the mean and the variance for quantifying uncertainty. The proposed uncertainty-splitting neural regression tree (USNRT) employs novel splitting criteria: at each node, a neural network is first trained on the full node data, and a statistical test on its residuals is conducted to find the best split, i.e., the one yielding the two subregions with the most significant uncertainty heterogeneity between them. USNRT is computationally friendly, because very few leaf nodes are sufficient and pruning is unnecessary. Furthermore, an ensemble version can easily be constructed to estimate the total uncertainty, including its aleatoric and epistemic components. On extensive UCI datasets, USNRT and its ensemble show superior performance compared with recent popular methods for quantifying uncertainty with variances. Through comprehensive visualization and analysis, we uncover how USNRT works and demonstrate its merits, showing that uncertainty heterogeneity does exist in many datasets and can be learned by USNRT.
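To make the splitting step concrete, the following is a minimal sketch of the node-splitting idea, not the paper's exact criterion: the network architecture (a small `MLPRegressor`), the choice of Levene's test as the residual-variance heterogeneity test, the quantile-based candidate thresholds, and the `min_leaf` parameter are all illustrative assumptions not specified in the abstract.

```python
# Hedged sketch of one USNRT-style split search (assumptions noted above).
import numpy as np
from scipy.stats import levene
from sklearn.neural_network import MLPRegressor

def best_uncertainty_split(X, y, quantiles=(0.25, 0.5, 0.75), min_leaf=30):
    """Fit a network on the full node data, then search for the feature/threshold
    pair whose two subregions show the most significant difference in residual
    variance (a proxy for uncertainty heterogeneity)."""
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
    resid = y - net.predict(X)
    best = None  # (p_value, feature_index, threshold)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], quantiles):
            left = resid[X[:, j] <= t]
            right = resid[X[:, j] > t]
            if len(left) < min_leaf or len(right) < min_leaf:
                continue  # skip splits that would create tiny subregions
            _, p = levene(left, right)  # tests equality of residual variances
            if best is None or p < best[0]:
                best = (p, j, t)
    return best  # split only if p is significant; otherwise make this node a leaf
```

Recursing this search until no significant split remains would yield the tree, after which a mean-and-variance network is fit on each leaf's data; since splitting stops once heterogeneity is insignificant, few leaves arise and no pruning pass is needed.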