Abstract

The market and social research industry is increasingly reliant on fast-turnaround, efficient online survey research to collect high-quality data that informs clients’ information needs. It is frequently acknowledged that high-quality data is essential for effective managerial decision-making and is one of the main drivers for selecting a supplier. Academic researchers have been critical of survey data from online panels, and they question the lack of quality metrics collected and reported by the industry. One of the main data quality challenges is to reduce the measurement error associated with low respondent participation and a lack of considered response effort. Low respondent participation and response effort in online surveys have been associated with: (1) respondent characteristics and motivations, (2) the survey environment, including the device used, the survey software deployed and the visual format of the questions, (3) the survey topic, (4) survey length, and (5) the incentives offered. Subgroups of the population are not represented in online panel samples, and samples from panels also have high non-response or low completion rates. Neither of these factors is random, and both have major implications for the representativeness of the samples collected.

The overarching research question in this thesis asks whether there are contextual factors affecting respondent participation and response effort that can be used to develop strategies to improve online survey data quality and, if so, which indicators of respondent performance can be used to test them empirically. The thesis explores this question guided by a conceptual framework based on cognitive load, assessing respondent performance through the application of several cognitive theories that attempt to explain attention to survey stimuli and the decision-making process involved in responding to a survey.

Essays from five studies included in this thesis are organised into three sections: (1) problem identification, (2) theoretical contribution, and (3) practical implications and conclusions. Each study is directed by a specific research question, data source and analysis technique, contributing knowledge to different parts of the conceptual framework. The first three studies identify the problems associated with respondent participation and effort, including a systematic bias in completion rates driven by device preference, which in turn is associated with socio-demographic characteristics. In addition, perceived survey duration is associated with satisficing, satisfaction and motivation to participate; the format of the survey question, however, is less likely to influence respondent performance. The theoretical contribution is provided in the fourth essay: a model to optimise survey participation informed by the heterogeneity of panel members and the relationship between ten contextual factors that drive or hinder response propensity. A practical application of how to minimise cognitive load by reducing superfluous survey items in a data-driven way, using a new statistical method, is provided in the final essay.

The four key findings of this research are: (1) perceived survey length is associated with survey burden, (2) satisficing occurs frequently and is often an unconscious heuristic for processing information efficiently rather than a conscious decision to minimise cognitive effort, (3) motivations to complete surveys can be based on perceived cognitive load, and (4) simple, short, relevant surveys result in less mental load, lower measurement error and greater respondent satisfaction. The research proposes managerial implications such as using the conceptual framework to design and test interventions that improve data quality in online surveys. It establishes a framework for academics and practitioners to collect and report respondent performance data; performance assessment can be used to test interventions that aim to increase survey participation and to predict response propensity. It also demonstrates how to reduce superfluous instruction text and survey items to increase responding with considered effort.

The conclusions of this research call for participant-centric strategies to engage attention and increase motivation to complete a survey with considered effort. Given the multiple contextual factors that affect survey participation and response effort, a set of performance indicators should be collected to inform the personalisation of survey design for panel members. The benefit to the market and social research industry is knowing how to gain the cooperation of survey participants so that they complete surveys diligently, which improves online survey data quality.

Despite its contributions to theory, method and practice, this thesis is not without limitations. First, sophisticated neuroscience experiments and the use of paradata are needed to fully investigate cognitive processing and inform theory and practice. Second, although the thesis greatly advances knowledge about respondents’ cognitive processing behaviour, more attention is needed to empirically appraise the assessment factors in the framework. The popularity of online surveys will continue to grow with new technological solutions, presenting future opportunities for academics who study survey methodology. This thesis advances the use of respondent performance assessment factors, grounded in cognitive load theory, to estimate measurement error in online surveys.
