Neural networks are increasingly integral to scientific modeling and a wide range of real-world applications. However, standard neural networks provide point predictions without reliable confidence estimates and are often poorly calibrated, yielding overconfident results. To address these challenges, researchers have focused on understanding and quantifying uncertainty in neural network predictions. This work offers a comprehensive overview of uncertainty estimation in neural networks, covering recent advances, current challenges, and future research opportunities. We explain the key sources of uncertainty, distinguishing reducible model uncertainty from irreducible data uncertainty, and explore methods for modeling these uncertainties, including deterministic neural networks, Bayesian neural networks (BNNs), ensembles, and test-time data augmentation. Additionally, the paper provides practical examples from fields such as medical data analysis, robotics, and autonomous driving, illustrating the challenges and requirements associated with uncertainty in real-world applications. We also examine the limitations of current uncertainty quantification methods in safety-critical applications and offer an outlook on future developments aimed at broader adoption of these methods across diverse domains. This paper serves as a valuable resource both for newcomers and for those experienced in the field of uncertainty estimation in neural networks.