Measurement can affect the people being measured; for example, asking people to complete a questionnaire can result in changes in behaviour (the 'question-behaviour effect'). The usual methods of conducting and analysing randomised controlled trials implicitly assume that taking measurements has no effect on research participants. Changes in measured behaviour and other outcomes due to measurement reactivity may therefore introduce bias into otherwise well-conducted randomised controlled trials, yielding incorrect estimates of intervention effects, including underestimates.

The main objectives were (1) to promote awareness of how and where taking measurements can lead to bias and (2) to provide recommendations on how best to avoid or minimise bias due to measurement reactivity in randomised controlled trials of interventions to improve health.

We conducted (1) a series of systematic and rapid reviews, (2) a Delphi study and (3) an expert workshop. A protocol paper was published [Miles LM, Elbourne D, Farmer A, Gulliford M, Locock L, McCambridge J, et al. Bias due to MEasurement Reactions In Trials to improve health (MERIT): protocol for research to develop MRC guidance. Trials 2018;19:653]. An updated systematic review examined whether measuring participants had an effect on their health-related behaviours relative to no-measurement controls. Three new rapid systematic reviews were conducted to identify (1) existing guidance on measurement reactivity, (2) existing systematic reviews of studies that have quantified the effects of measurement on behavioural and affective outcomes and (3) experimental studies that have investigated the effects of exposure to objective measurements of behaviour on health-related behaviour. In the Delphi study, the views of 40 experts, gathered in two waves of data collection, defined the scope of the recommendations. The expert workshop aimed to produce a set of recommendations formed through group discussion.

Systematic reviews - we identified a total of 43 studies that compared interview or questionnaire measurement with no measurement; these showed an overall small effect (standardised mean difference 0.06, 95% confidence interval 0.02 to 0.09; n = 104,096, I² = 54%). The three rapid systematic reviews identified no existing guidance on measurement reactivity, but did identify five systematic reviews that quantified the effects of measurement on outcomes (all focused on the question-behaviour effect, with all standardised mean differences in the range 0.09-0.28) and 16 studies that examined reactive effects of objective measurement of behaviour, with most evidence indicating reactivity of small effect and short duration. Delphi procedure - substantial agreement was reached on the scope of the present recommendations. Workshop - 14 recommendations and three main aims were produced. The aims were to identify whether bias is likely to be a problem for a trial, to decide whether to collect further quantitative or qualitative data to inform decisions about whether bias is likely to be a problem, and to identify how to design trials to minimise the likelihood of this bias.

The main limitation was the shortage of high-quality evidence, with some notable exceptions, on the extent of measurement reactivity and the circumstances likely to bring it about. We hope that these recommendations will be used to develop new trials that are less likely to be at risk of bias.
The greatest need is to increase the number of high-quality primary studies regarding the extent of measurement reactivity. The first systematic review in this study is registered as PROSPERO CRD42018102511. Funded by the Medical Research Council UK and the National Institute for Health Research as part of the Medical Research Council-National Institute for Health Research Methodology Research Programme.
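For readers less familiar with the summary statistics reported above, the following are the standard textbook definitions (Cohen's d form of the standardised mean difference and Higgins' I²); they are given here only for orientation and are not taken from the report itself, whose exact estimators (for example, a bias-corrected Hedges' g or a particular random-effects weighting) are not stated in this abstract.

\[
\mathrm{SMD} \;=\; \frac{\bar{x}_{\mathrm{measured}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]
\[
I^2 \;=\; \max\!\left(0,\; \frac{Q - (k - 1)}{Q}\right) \times 100\%
\]

Here Q is Cochran's heterogeneity statistic and k is the number of pooled studies, so I² estimates the proportion of between-study variability attributable to heterogeneity rather than chance.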