Deep neural networks (DNNs) are often coupled with physics-based and data-driven models to perform fault detection and health monitoring. The system models serve as digital surrogates that generate large quantities of training data that would otherwise be difficult to obtain from the real-life system. In such a scenario, the uncertainty in the system model and in the DNN parameters influences the predictions of the DNN. Here, we quantify the impact of this uncertainty on the performance of DNNs. The uncertainty from the system model is captured with two methods, namely assumed density filtering and heteroskedastic modelling. In addition to quantification, these methods enable training DNNs in an uncertainty-aware manner. The uncertainty in the DNN parameters is captured with Monte Carlo dropout. The proposed approach is demonstrated for fault diagnosis of electric power lines. Data generated from a physics-based model calibrated with real-life measurements is used to train three neural network architectures for fault diagnosis. The results reveal that uncertainty-aware models can improve classification accuracy by 1% to 19% over their deterministic counterparts. The uncertainty-aware models are also more robust to uncertainty and thus offer more reliable models for deployment. Notably, the article provides a system-agnostic framework for uncertainty-aware training of DNN models for fault diagnosis and monitoring that explicitly accounts for the synthetic nature of the training data.
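
As a minimal, illustrative sketch of how two of the abstract's ingredients can be combined (not the authors' implementation), the snippet below shows a classifier with a heteroskedastic head that models input noise from the synthetic data and Monte Carlo dropout for parameter uncertainty at inference; assumed density filtering is not shown. All layer sizes, dropout rates, and function names are assumptions made for illustration only.

```python
# Illustrative sketch: heteroskedastic (aleatoric) head + MC dropout (epistemic).
# Architecture details are hypothetical and not taken from the article.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyAwareClassifier(nn.Module):
    def __init__(self, n_features: int, n_classes: int, p_drop: float = 0.1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.logits = nn.Linear(64, n_classes)    # class scores
        self.log_var = nn.Linear(64, n_classes)   # heteroskedastic log-variance

    def forward(self, x):
        h = self.backbone(x)
        return self.logits(h), self.log_var(h)

def heteroskedastic_loss(logits, log_var, y, n_samples: int = 10):
    """Corrupt the logits with the predicted noise and average the resulting
    cross-entropy over Monte Carlo samples (a simplified loss-attenuation scheme)."""
    std = torch.exp(0.5 * log_var)
    losses = []
    for _ in range(n_samples):
        noisy_logits = logits + std * torch.randn_like(std)
        losses.append(F.cross_entropy(noisy_logits, y))
    return torch.stack(losses).mean()

@torch.no_grad()
def mc_dropout_predict(model, x, n_passes: int = 50):
    """Keep dropout active at inference and average softmax outputs over several
    stochastic forward passes; the spread across passes reflects parameter uncertainty."""
    model.train()  # leaves dropout layers stochastic during prediction
    probs = torch.stack([F.softmax(model(x)[0], dim=-1) for _ in range(n_passes)])
    return probs.mean(0), probs.std(0)
```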