Abstract

Many researchers agree that data saturation is a key driver for determining the adequacy of sample size in a qualitative case study. Despite this broad consensus, some researchers describe data saturation as complex because the decision to stop data collection is dictated solely by the judgment and experience of the researcher. Other researchers claim that guidelines for determining non-probability sample sizes, used as an indication of data saturation, are virtually non-existent, problematic, or controversial. Still others argue that tying data saturation to sample size is practically weak, because data are never truly saturated: there are always new data to be discovered. This narrative study highlights the dilemma of data saturation and strategies for adequately determining sample size in a qualitative case study. A narrative review was adopted, focusing on the extensive body of literature that reveals significant information on data saturation and strategies for adequately determining sample size. Peer-reviewed articles from the last five years were extracted from electronic databases using keywords such as “qualitative case study”, “sample size in a qualitative case study”, and “data saturation”. Results show that data saturation is very helpful, especially at the conceptual stage, but its concept and standards are elusive, because it lacks practical guidance for estimating sample size for robust research prior to data collection. Findings from this study may encourage researchers to develop better guidelines for determining non-probability sample sizes.
