Film cooling is a typical three-dimensional flow phenomenon in which low-temperature coolant is ejected from discrete holes to protect metal walls from the hot mainstream. Accurately predicting the coolant coverage on the wall is a great challenge for Reynolds-averaged Navier–Stokes (RANS) methods because turbulent thermal diffusion tends to be under-predicted owing to the inherent assumptions behind RANS models. In this paper, an integrated field inversion and machine learning framework is built to enhance the RANS prediction of turbulent thermal diffusion. A neural network (NN) is trained in this framework to predict the spatially varying turbulent Prandtl number (Prt) and thereby improve the predictions of RANS models. The temperature distribution obtained from large eddy simulation is used as the learning target, and the discrete adjoint method serves as the inverse model, providing the derivatives of the mean-square temperature error with respect to the NN parameters. The training process of the NN shows good convergence. The results show that the trained NN effectively compensates for the insufficient turbulent thermal diffusion by predicting Prt values much lower than the commonly used value of 0.9. The NN-enhanced RANS significantly improves the prediction of experimental temperature distributions compared with general RANS models, not only on the training data but also on unseen testing data. In addition, the obtained NN can be implemented in general-purpose software with minimal effort and without numerical stability problems.
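To make the adjoint-coupled training described above concrete, the following is a minimal sketch of one field-inversion step, assuming a PyTorch-style network and hypothetical placeholder interfaces (rans_solve, adjoint_gradient, T_les) to an external RANS/adjoint solver; it illustrates how the adjoint-derived sensitivity dJ/dPrt is chained through the NN to its parameters, not the authors' actual implementation.

```python
# Minimal sketch of one field-inversion training step, under the stated assumptions.
# rans_solve, adjoint_gradient, flow features, and T_les are hypothetical placeholders.
import torch
import torch.nn as nn

class PrtNet(nn.Module):
    """Maps local mean-flow features to a spatially varying turbulent Prandtl number."""
    def __init__(self, n_features: int):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 32), nn.Tanh(),
            nn.Linear(32, 32), nn.Tanh(),
            nn.Linear(32, 1), nn.Softplus(),  # keep Prt strictly positive
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.layers(features).squeeze(-1)

def train_step(net, optimizer, features, rans_solve, adjoint_gradient, T_les):
    """One step: RANS forward solve, discrete-adjoint sensitivity, NN parameter update."""
    optimizer.zero_grad()
    prt = net(features)                       # predicted Prt at every cell
    prt_np = prt.detach().cpu().numpy()

    T_rans = rans_solve(prt_np)               # external RANS solve with this Prt field
    loss = float(((T_rans - T_les) ** 2).mean())  # mean-square error against LES target

    dJ_dprt = adjoint_gradient(prt_np)        # dJ/dPrt from the discrete adjoint solver
    # Chain rule: propagate the adjoint sensitivity through the NN to its parameters.
    prt.backward(gradient=torch.as_tensor(dJ_dprt, dtype=prt.dtype))
    optimizer.step()
    return loss
```

In this sketch the CFD and adjoint solves stay outside the automatic-differentiation graph; only the map from flow features to Prt is differentiated by the NN framework, which mirrors the division of labor implied by using the discrete adjoint as the inverse model.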