Several studies have demonstrated that response times in natural catchments decrease with increasing rainfall intensity. Consequently, event-based estimates of catchment response times are of paramount importance in applied hydrology. In particular, they have the potential to address a major inconsistency in the use of empirical formulas, which often treat response times as constant parameters, regardless of whether extreme or frequent flood events are considered, thereby neglecting the role of flow velocities.

In this paper, building upon previous approaches developed and/or analyzed by the authors, two recent methods for event-based estimation of catchment response times are critically reviewed and their predictive performances compared. First, four "physically based" formulas are considered; they were calibrated using synthetic rainfalls in three small Italian watersheds to reproduce the results of a two-dimensional hydrodynamic rainfall/runoff model and, consequently, the simulated wave celerities. Then, the detrending moving-average cross-correlation analysis (DMCA) is applied to estimate the average time elapsed between the centroids of the precipitation and discharge time series.

The soundness of these two approaches is first assessed on the basis of their ability to reproduce lag times estimated from observations. Their robustness is further evaluated by analyzing the magnitude and basin-scale dependence of the inferred velocities against observed values, following an approach recently proposed in the literature. These issues are discussed with reference to 60 rainfall-runoff events recorded in 27 watersheds in Hungary and Italy with substantially different geomorphic and climatic features, highlighting both the potential of the methods and the need for further improvements. Both approaches yield errors of around 37% for the proposed dataset.
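As a point of reference for the quantity the DMCA-based estimate targets, the sketch below illustrates the centroid-based definition of catchment lag time, i.e. the time elapsed between the centroid of the event hyetograph and the centroid of the event hydrograph. It is not the DMCA procedure itself; the function name and the synthetic event data are hypothetical and serve only to make the definition concrete.

```python
import numpy as np

def centroid_lag_time(time, rainfall, discharge):
    """Illustrative lag-time estimate: time elapsed between the centroid of
    the rainfall hyetograph and the centroid of the runoff hydrograph,
    with both series sampled at the same time instants."""
    t_rain = np.sum(time * rainfall) / np.sum(rainfall)      # hyetograph centroid
    t_flow = np.sum(time * discharge) / np.sum(discharge)    # hydrograph centroid
    return t_flow - t_rain

# Minimal usage with synthetic (hypothetical) event data
t = np.arange(0, 24, 0.5)                     # hours since event start
p = np.where(t < 3, 10.0, 0.0)                # 3 h rainfall burst [mm/h]
q = np.exp(-0.5 * ((t - 7.0) / 2.0) ** 2)     # idealized runoff response
print(f"Estimated lag time: {centroid_lag_time(t, p, q):.1f} h")
```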