Abstract

Good research studies are planned [1], and at some point in the process investigators ask the question, “How many participants do we need?” Researchers should consider this in relation to their specific question, and Eng [2] stated that it should be settled before the study begins. The decision can rest on several considerations, such as resources, the practical issues of conducting the research and time constraints [3]. The sample size can also be estimated formally, a process that allows the researcher to achieve a result that is both statistically significant and clinically important [4]. Sample size estimations are often maligned and misunderstood. They are common in the health care disciplines but not in others [3]. They have their supporters [4, 5], their critics [6, 7], and alternatives have been offered [8, 9]. However, their use is unlikely to disappear [3], and for a randomised controlled trial they are essential [10]. This paper will attempt to illustrate how sample size estimates can be used in certain situations and to offer the calculations in their simplest form.

So what does a researcher have to do? The determination of sample size has been defined as “the mathematical process of deciding, before a study begins, how many subjects should be studied” [11]. It is the minimum number of participants needed to identify a significant difference, provided one exists [4]. Sample size estimation shares its origins with hypothesis testing [12]. To conduct a sample size estimate, the researcher must set several parameters using their expertise of the subject area: the significance level (α), the power of the study (1 − β), the difference to be detected and the variability of the measure under observation. In addition, they have to decide whether the test is one-sided or two-sided; in this paper all tests are two-sided.

Two of these parameters, significance and power, are often set by convention. The significance level (α) is usually set at 0.05. It corresponds to the probability of making a type I error: rejecting the null hypothesis when it is true. Controlling type I errors prevents the adoption of ineffective treatments [13]. The power of a study is usually set at 0.8 and is the chance of rejecting the null hypothesis when it is false; accepting the null hypothesis when it is in fact false is a type II error [14]. Biau et al. [13] referred to type I errors as false positives and type II errors as false negatives, and emphasised the importance of sample size estimates because the sample size determines the risk of false negative results.

The researcher also needs to estimate the minimum expected difference between the two groups being investigated. This is a subjective parameter, based on clinical judgement and the expertise of the researchers [2]. Lastly, the variability of the outcome measure under investigation must be determined. For a continuous (interval/ratio) variable, a standard deviation is required; this is unlikely to be available before the study [13], so it is usually estimated from preliminary data or from a review of the literature [2].

The actual calculations for sample sizes can be mystifying and frightening when first encountered; for the comparison of two means, the standard two-sided formula is (Equation 1) [13]:

$$n = \frac{2\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2} s^{2}}{d^{2}}$$

However, investigators have produced friendlier methods, presented as equations, tables [15] or nomograms [16, 17]. Perhaps Lehr [18] offers the easiest way to estimate the number of subjects per group at p < 0.05 and power = 80% (Equation 2):

$$n = \frac{16 s^{2}}{d^{2}}$$
where s is the standard deviation and d is the difference to be detected.
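As a concrete illustration, the following sketch (in Python with SciPy; the function names and the example values of s = 10 and d = 5 are hypothetical, chosen only for demonstration) computes the per-group sample size from the standard formula of Equation 1 and checks it against Lehr's approximation:

import math

from scipy.stats import norm  # standard normal distribution


def n_per_group(sd, diff, alpha=0.05, power=0.80):
    """Participants per group for comparing two means (two-sided test),
    using the normal-approximation formula of Equation 1."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 when alpha = 0.05
    z_beta = norm.ppf(power)           # 0.84 when power = 0.80
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / diff ** 2
    return math.ceil(n)                # round up to whole participants


def n_lehr(sd, diff):
    """Lehr's rule of thumb (Equation 2): n = 16 s^2 / d^2 per group,
    valid at p < 0.05 and 80% power."""
    return math.ceil(16 * sd ** 2 / diff ** 2)


# Hypothetical example: detect a difference of 5 units, standard deviation 10.
print(n_per_group(sd=10, diff=5))  # -> 63
print(n_lehr(sd=10, diff=5))       # -> 64

The two results agree closely because 2(1.96 + 0.84)² ≈ 15.7, which Lehr rounds up to 16; the rule of thumb therefore errs slightly on the conservative side.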
