Structural break tests are often applied as a pre-step to ensure the validity of subsequent statistical analyses. Without a priori knowledge of the type of break to expect, researchers often eye-ball the data, which may suggest a change in some parameter, e.g., the mean. This, however, can distort the result of a structural break test for that parameter, because the data themselves suggested the hypothesis being tested. In this paper, we formalize the eye-balling procedure and theoretically derive the implied size distortion of the structural break test. We also show that eye-balling a stretch of historical data for possible changes in a parameter does not invalidate a subsequent procedure that monitors for structural change in new incoming observations. An empirical application to Bitcoin returns shows that accounting for the data-dredging bias incurred by looking at the data can lead to different test decisions.
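For intuition on the size distortion, the following is a minimal Monte Carlo sketch, not taken from the paper: the simulation design, sample size, trimming fraction, and the use of a two-sample t-test to stand in for the structural break test are illustrative assumptions. It treats "eye-balling" as picking the split point where the mean shift looks largest and then testing at that point as if it had been pre-specified, which over-rejects under the null relative to a test at a fixed breakpoint.

```python
import numpy as np
from scipy import stats

# Illustrative settings (assumptions, not the paper's setup)
rng = np.random.default_rng(0)
n, reps, alpha = 200, 2000, 0.05
trim = int(0.15 * n)  # keep candidate breakpoints away from the sample edges

rej_fixed, rej_eyeballed = 0, 0
for _ in range(reps):
    x = rng.standard_normal(n)  # null hypothesis: constant mean, no break

    # (a) test for a mean shift at a pre-specified midpoint
    _, p_fixed = stats.ttest_ind(x[: n // 2], x[n // 2 :])
    rej_fixed += p_fixed < alpha

    # (b) "eye-balling": choose the split where the mean shift looks largest,
    #     then compute the p-value as if that breakpoint were known in advance
    t_max = max(
        abs(stats.ttest_ind(x[:k], x[k:])[0]) for k in range(trim, n - trim)
    )
    p_eyeballed = 2 * stats.t.sf(t_max, df=n - 2)
    rej_eyeballed += p_eyeballed < alpha

print(f"empirical size, fixed breakpoint:       {rej_fixed / reps:.3f}")      # close to the nominal 5%
print(f"empirical size, data-chosen breakpoint: {rej_eyeballed / reps:.3f}")  # noticeably above 5%
```

The contrast between the two rejection rates is the data-dredging bias in its simplest form: the naive reference distribution no longer applies once the breakpoint (or the hypothesis itself) is selected by looking at the data.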