Noisy intermediate-scale quantum (NISQ) computers are computing hardware in their infancy, but they show great promise and are developing rapidly. They are based on so-called qubits, the quantum counterparts of classical bits. Any given qubit state yields a certain probability of observing a zero or a one in the readout process. One of the main concerns for NISQ machines is the inherent noisiness of qubits: the observed frequencies of zeros and ones do not match the theoretically expected probabilities, because qubit states are subject to random disturbances over time and with each additional gate operation applied to them. Models describing the influence of this noise exist. In this study, we conduct extensive experiments on quantum noise. Based on our data, we show that existing noise models lack important aspects. Specifically, they fail to properly capture the accumulation of noise effects over time (or over an algorithm's runtime), and they cannot account for the overdispersion we observe. By overdispersion, we refer to the fact that observed frequencies scatter far more between repeated experiments than the standard assumptions of the binomial distribution would allow. To address these shortcomings, we develop an extended noise model for the probability distribution of observed frequencies as a function of the number of gate operations. The model builds on a known continuous random walk on the (Bloch) sphere, whose angular diffusion coefficient characterizes the standard noisiness of gate operations. On top of this, we superimpose a second random walk at the scale of multiple readouts to account for overdispersion. Furthermore, our model includes known, explicit components for noise during state preparation and measurement (SPAM). The interaction of these two random walks predicts theoretical, runtime-dependent bounds on the observable probabilities.
Overall, this yields a three-parameter distributional model that fits the data much better than the corresponding one-scale model (without overdispersion), and we demonstrate the improved fit and the plausibility of the predicted bounds via a Bayesian data-model analysis.
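The two-scale mechanism described above can be sketched in a few lines of simulation code. This is only a minimal illustration of the general idea, not the paper's actual model or fitted parameters; the function name and the parameters `d_gate`, `d_readout`, and `p_spam` are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_frequencies(n_gates, n_shots, n_experiments,
                         d_gate=0.01, d_readout=0.05, p_spam=0.02):
    """Illustrative two-scale random walk (parameter names are hypothetical):
    - a fast angular diffusion on the Bloch sphere, accumulating d_gate
      variance per gate operation within one circuit execution,
    - a slow second random walk between repeated experiments (d_readout),
      which produces overdispersion relative to a plain binomial,
    - a symmetric SPAM error p_spam that flips the readout.
    Returns the observed frequency of ones for each repeated experiment."""
    freqs = np.empty(n_experiments)
    theta_slow = 0.0  # slow walk: drifts across repeated readout sessions
    for i in range(n_experiments):
        theta_slow += rng.normal(0.0, np.sqrt(d_readout))
        # fast walk: accumulated per-gate angular kicks over the circuit
        theta = theta_slow + rng.normal(0.0, np.sqrt(d_gate * n_gates))
        p_one = 0.5 * (1.0 - np.cos(theta))      # ideal readout probability
        p_obs = (1 - p_spam) * p_one + p_spam * (1 - p_one)  # SPAM flip
        freqs[i] = rng.binomial(n_shots, p_obs) / n_shots
    return freqs

# Overdispersion check: the scatter of frequencies across repeated
# experiments far exceeds what a single binomial distribution predicts.
f = simulate_frequencies(n_gates=50, n_shots=1024, n_experiments=200)
p_hat = f.mean()
binomial_var = p_hat * (1 - p_hat) / 1024
print(f.var() > binomial_var)  # prints: True
```

Without the slow second walk (`d_readout = 0`), the simulated frequencies would be approximately binomial around a single drifting probability; the second scale is what inflates the between-experiment variance.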