This study investigated the effects of measurement error and testing frequency on the prediction accuracy of the standard fitness-fatigue model. A simulation-based approach was used to systematically assess measurement error and testing frequency inputs commonly used when monitoring the training of athletes. Two hypothetical athletes (intermediate and advanced) were developed, and realistic training loads and daily ‘true’ power values were generated using the fitness-fatigue model across 16 weeks. Simulations were then completed by adding Gaussian measurement errors (mean 0, with set standard deviations) to the true values to recreate more and less reliable measurement practices used in real-world settings. Errors were added during the model training phase (weeks 1–8), and data were sampled to recreate different testing frequencies (from every day to once per week) when obtaining parameter estimates. In total, 210 sets of simulations (N = 10⁴ iterations) were completed using an iterative hill-climbing optimisation technique. Parameter estimates were then combined with training loads in the model testing phase (weeks 9–16) to quantify prediction errors. Regression analyses identified positive associations between prediction errors and the linear combination of measurement error and testing frequency (R² = 0.87–0.94). Significant model improvements (P < 0.001) were obtained across all scenarios by including an interaction term, demonstrating greater deleterious effects of measurement error at low testing frequencies. The findings of this simulation study represent a lower-bound case and indicate that in real-world settings, where a fitness-fatigue model is used to predict training response, measurement practices that generate coefficients of variation greater than approximately 4% will not provide satisfactory results.
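The simulation procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: the standard (Banister) fitness-fatigue model generates daily ‘true’ performance values from a load history, and Gaussian measurement error with a chosen coefficient of variation is then added. All parameter values, the load distribution, and the seed are hypothetical placeholders chosen for illustration only.

```python
import numpy as np

def fitness_fatigue(loads, p0, k1, k2, tau1, tau2):
    """Standard fitness-fatigue model: performance on day t is baseline p0
    plus an exponentially weighted sum of prior loads (fitness, decay tau1)
    minus a faster-decaying weighted sum (fatigue, decay tau2)."""
    n_days = len(loads)
    p = np.empty(n_days)
    for t in range(n_days):
        s = np.arange(t)  # days strictly before t
        fitness = np.sum(loads[s] * np.exp(-(t - s) / tau1))
        fatigue = np.sum(loads[s] * np.exp(-(t - s) / tau2))
        p[t] = p0 + k1 * fitness - k2 * fatigue
    return p

rng = np.random.default_rng(42)
loads = rng.uniform(50.0, 150.0, size=112)  # 16 weeks of daily loads (hypothetical)

# Hypothetical parameters: baseline power p0 and gain/decay constants
true_p = fitness_fatigue(loads, p0=260.0, k1=0.10, k2=0.30, tau1=42.0, tau2=7.0)

# Add Gaussian measurement error scaled to a chosen coefficient of variation,
# e.g. 4% CV, the threshold the study identifies as problematic
cv = 0.04
observed_p = true_p + rng.normal(0.0, cv * true_p)
```

In the study's design, noisy `observed_p` values from weeks 1–8 would be subsampled at the chosen testing frequency and fed to the optimiser to recover parameter estimates, which are then evaluated against the noise-free values in weeks 9–16.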