Abstract

Collecting large‐scale comparative management data from multiple countries poses challenges in demonstrating methodological rigour, including the need for representativeness. We examine the rigour of sample representativeness and the counterbalancing effect of sample relevance, and explore sampling options, equivalence across countries, data collection procedures and response rates. We identify the challenges posed by cross‐national survey data collection, and suggest that the ideal research designs presented in much of the literature might not be practical or desirable in large‐scale, multi‐time‐point, cross‐national comparative management studies because of the need to ensure relevance across such contexts. Using the example of Cranet – a large‐scale, multi‐time‐point, cross‐national survey of human resource management – we offer suggested solutions for balancing both rigour and relevance in research of this nature.

Highlights

  • There is a long tradition in management research of comparing countries, with country or national context recognized as a powerful explanatory variable in developing our understanding of complex management phenomena.

  • Accompanying this growth in international comparative management research is an increasing awareness of the many conceptual, collaborative, methodological and analytical challenges that arise when simultaneously collecting comparable data across multiple countries. This additional complexity is vested largely in ‘the variability across nations of various constraining factors’ (Lynn, 2003, p. 323) that would be considered fixed in single-country studies. Several of these challenges are evident in shortcomings that arguably pervade extant international comparative management research: a reliance on untested secondary data that have often been collected for different purposes in different countries, underscored by a largely positivist tradition and an over-reliance on quantitative methods; a preponderance of managerial respondents, with far fewer studies canvassing other stakeholders, raising the spectre of common method bias; too few longitudinal studies; a gap between academic research and what is needed or understood by management practitioners; and generally weak explanations of observed differences and similarities (Cascio, 2012; Cheng, 2007; Chidlow et al., 2015; Clark, Gospel and Montgomery, 1999; Doty and Glick, 1998; Podsakoff et al., 2003; Romani et al., 2018; Starkey and Madan, 2001; Yang, Wang and Su, 2000).

  • It is impossible to be completely confident about who answers the questionnaire, but by providing guidance that the survey should be steered towards those individuals who have the most knowledge of the phenomenon of interest, and by asking only factual questions of these individuals, we argue that researchers can collect reliable data without increasing the required resources or damaging response rates.

Summary

Introduction

There is a long tradition in management research of comparing countries, with country or national context recognized as a powerful explanatory variable in developing our understanding of complex management phenomena. Accompanying this growth in international (i.e. multi-country) comparative management research is an increasing awareness of the many conceptual, collaborative, methodological and analytical challenges that arise when simultaneously collecting comparable data across multiple countries. This additional complexity is vested largely in ‘the variability across nations of various constraining factors’ (Lynn, 2003, p. 323). Researchers therefore need to exercise a degree of pragmatism about what will and will not work methodologically.

With this in mind, we focus here on issues of sample selection and data collection in comparative, multi-time-point, multi-country surveys, which pose methodological and resourcing challenges for scholars wishing to map the contours of management systems in different countries through large-scale original empirical work. For pragmatic reasons, large-scale comparative surveys usually use non-probability sampling. This is the case with Cranet, where complete databases of organizations and their HRM contacts are not available in all the countries included in the research. It is impossible to be completely confident about who answers the questionnaire, but by providing guidance that the survey should be steered towards those individuals who have the most knowledge of the phenomenon of interest (in this case HRM policies and practices), and by asking only factual questions of these individuals, we argue that researchers can collect reliable data without increasing the required resources or damaging response rates.

Response rates
Equivalence across countries
Data collection procedures
Conclusions