ABSTRACT
Evaluative tools for qualitative research need to be developed and designed in a way that allows the research community to use them to assess qualitative research on its own terms, and thus strengthen, rather than undermine, research quality. The diversity of qualitative research practice makes the development of ‘one size fits all’ tools challenging. When evaluating Big Q Qualitative – the use of practices for generating and analysing qualitative data underpinned by qualitative research values – many existing ‘one size fits all’ reporting checklists and standards have the potential to introduce methodological incongruence through their inclusion of criteria that do not ‘fit’ or align with Big Q values. The values and practices of Big Q Qualitative research, and the paradigms and meta-theoretical assumptions that inform them, are typically incommensurable with ideas and ideals founded in disciplinarily dominant (post)positivism/objectivism and scientific realism. The unknowing, or knowing-but-required, application of ill-fitting criteria and reporting standards risks not just incongruence, but the undermining of the vitality and creativity of Big Q Qualitative research. Evaluative guidelines nevertheless remain important tools, both pragmatically and rhetorically. In this paper, we explain and justify our development of a set of reporting guidelines to support methodologically congruent and reflexively open evaluation and reporting of Big Q Qualitative research. The Big Q Qualitative Reporting Guidelines (BQQRG) articulate a values-based, rather than consensus-based, framework for reporting and evaluating qualitative research.