Abstract

In late 2018, the Journal of Operations Management published an invited methods article by Lonati et al. (2018) to provide guidance to authors on how to design behavioral experiments that achieve the rigor required for consideration in the journal. That article was written in response to a number of behavioral research submissions to JOM, each dealing with interesting topics but viewed by the editors as having poor design choices at inception. While the Lonati et al. (2018) piece provides experimental guidance suited to certain research agendas, questions have arisen concerning whether and how exactly to implement some of the points that it makes, and how to best address trade‐offs in the design of behavioral experiments. Questions have also arisen concerning how to apply these concepts in operations management research. This technical note seeks to address these questions by diving into the details of research risks and trade‐offs regarding demand effects, incentives, deception, sample selection, and context‐rich vignettes. The authors would like to recognize the input of a large number of senior scholars in the JOM community who have provided support and feedback as we have sought to help authors tease out what can reasonably be done in designing strong behavioral experiments that fit various research agendas.
