Abstract
The fields of neuroscience and psychology are currently in the midst of a so-called “reproducibility crisis”, with growing concerns regarding a history of weak effect sizes and low statistical power in much of the research published in these fields over the last few decades. Whilst the traditional response to this criticism has been to increase participant sample sizes, there are many research contexts in which the number of trials per participant may be of equal importance. The present study aimed to compare the relative importance of participants and trials in the detection of phase-dependent phenomena, which are measured across a range of neuroscientific contexts (e.g., neural oscillations, non-invasive brain stimulation). We achieved this within a simulated environment in which the strength of the phase-dependency could be manipulated for two types of outcome variable: one with normally distributed residuals (idealistic) and one comparable to motor-evoked potentials (MEP-like). We compared statistical power across thousands of simulated experiments, each with the same total number of sessions but with different allocations of participants and sessions per participant (30 participants ⨉ 1 session, 15 participants ⨉ 2 sessions, and 10 participants ⨉ 3 sessions), with trials pooled across sessions for each participant. These simulations were performed for both outcome variables (idealistic and MEP-like) and four effect sizes: 0.075 (‘weak’), 0.1 (‘moderate’), 0.125 (‘strong’), and 0.15 (‘very strong’), as well as for separate control scenarios with no true effect. Across all scenarios with true, discoverable effects, and for both outcome types, there was a statistical benefit for experiments maximising the number of trials rather than the number of participants (i.e., it was always beneficial to recruit fewer participants but have them complete more trials).
These findings emphasise the importance of obtaining sufficient individual-level data rather than simply increasing participant numbers.
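The kind of power simulation summarised above can be sketched in a few lines. The sketch below is illustrative only: the trial counts, unit-normal noise, a preferred phase shared across participants, and the analysis pipeline (per-participant sinusoidal least-squares fit on pooled trials, then a group-level one-sample t-test on the fitted sine coefficients) are all assumptions for the purpose of this example, not the study's actual methods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)


def simulate_power(n_participants, n_sessions, effect,
                   trials_per_session=100, n_experiments=300, alpha=0.05):
    """Estimate power to detect a phase-dependent modulation.

    Each trial's outcome is effect * sin(phase) plus unit-normal noise
    (the 'idealistic' variable); trials are pooled across sessions per
    participant. Detection: fit y ~ intercept + sin(phase) + cos(phase)
    per participant, then t-test the sine coefficients against zero
    across participants.
    """
    n_trials = n_sessions * trials_per_session
    n_rejections = 0
    for _ in range(n_experiments):
        sin_coefs = np.empty(n_participants)
        for p in range(n_participants):
            phase = rng.uniform(0.0, 2.0 * np.pi, size=n_trials)
            y = effect * np.sin(phase) + rng.standard_normal(n_trials)
            design = np.column_stack(
                [np.ones(n_trials), np.sin(phase), np.cos(phase)])
            coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
            sin_coefs[p] = coefs[1]
        if stats.ttest_1samp(sin_coefs, 0.0).pvalue < alpha:
            n_rejections += 1
    return n_rejections / n_experiments


# Same total number of sessions (30), different allocations:
for n_participants, n_sessions in [(30, 1), (15, 2), (10, 3)]:
    power = simulate_power(n_participants, n_sessions, effect=0.1)
    print(f"{n_participants} participants x {n_sessions} sessions: "
          f"power ~ {power:.2f}")
```

Under these simplified assumptions, pooling more trials per participant shrinks the error of each individual's fitted effect, which is the mechanism the abstract's findings point to; a null run (`effect=0.0`) should yield a rejection rate near the nominal alpha.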