Risk assessment studies of local asteroid hazards traditionally simulate the physics of meteor entry with engineering models tailored to analyze tens of millions of scenarios. However, even these simplified approaches must solve time-dependent ODEs to model the entry process and the resulting ground damage. At a computational cost of O(0.01) CPU-seconds per scenario, simulating such large numbers of potential entry conditions can take several days on a local computer. To improve computational efficiency, we propose in this paper an orthogonal approach based on machine learning models that predict the size of damaged areas from a list of entry parameters. We train five machine learning methods and compare their predictions to the outputs of NASA's state-of-the-art Probabilistic Asteroid Impact Risk (PAIR) model. PAIR's capability to handle a broad spectrum of entry conditions and a high volume of scenarios establishes it as our reference model: PAIR estimates are taken as ground truth and used to train, validate, and test the machine learning models. The ML models, initially trained on PAIR's raw input variables, are then further enhanced with engineered features to improve the performance of interpretable models. We find that complex models such as neural networks are well suited to estimating blast hazards, while simpler, interpretable linear models can accurately assess thermal damage. For both types of hazards, the radii of damaged areas are predicted with average relative errors below 10% for radii greater than 100 km. The CPU time is reduced by a factor of O(10³) compared to the PAIR model, enabling the simulation of millions of scenarios within minutes on a local computer. Beyond serving as surrogate models, the machine learning models are incorporated into a comprehensive Shapley sensitivity analysis, yielding a ranking of entry parameters by their contributions to ground damage. These results offer valuable insights for prioritizing observation missions.
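
As a minimal illustration of the workflow summarized above (fitting a surrogate to PAIR-style scenario data, then ranking entry parameters with Shapley values), the following Python sketch uses scikit-learn and the shap package. The feature names, synthetic data, and model settings are illustrative assumptions for this sketch, not the configuration actually used in the paper.

```python
# Hypothetical sketch: neural-network surrogate for damage radius,
# followed by a Shapley-value ranking of the entry parameters.
# All data here is synthetic; the real study trains on PAIR outputs.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for PAIR-generated scenarios: entry parameters -> damage radius (km).
# These five parameter names are assumptions, not the paper's feature set.
features = ["diameter", "density", "velocity", "entry_angle", "strength"]
X = rng.uniform(0.5, 1.5, size=(5000, len(features)))
y = 100.0 * X[:, 0] * X[:, 2] + 10.0 * X[:, 1]  # placeholder response

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Surrogate model; once trained, predictions cost far less than solving ODEs.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
surrogate.fit(X_train, y_train)

# Average relative error on held-out scenarios.
pred = surrogate.predict(X_test)
rel_err = np.mean(np.abs(pred - y_test) / np.abs(y_test))
print(f"average relative error: {rel_err:.1%}")

# Model-agnostic Shapley attribution: rank parameters by mean |SHAP value|.
background = shap.sample(X_train, 100)
explainer = shap.KernelExplainer(surrogate.predict, background)
shap_values = explainer.shap_values(X_test[:100])
mean_abs = np.abs(shap_values).mean(axis=0)
for i in np.argsort(-mean_abs):
    print(f"{features[i]}: {mean_abs[i]:.3f}")
```

KernelExplainer is used here because it is model-agnostic, which suits comparing attributions across several trained methods; for neural networks specifically, faster gradient-based explainers are also available in the shap package.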