Abstract
During the past few years, researchers have criticized their own disciplines for providing an entry point for false-positive results arising from publication bias and questionable research practices such as p-hacking (i.e., selectively reporting analyses that yield a p-value below .05). To identify trustworthy effects, researchers advocate replication studies and the implementation of open-science practices such as preregistration. Nevertheless, because these developments in consumer research are still recent, most prior findings have not been replicated, leaving researchers in the dark as to whether a given line of research or a particular effect is trustworthy. We tackle this problem by providing a toolbox of heuristics for identifying data patterns that, based on the information reported in published articles, may indicate publication bias and p-hacking. The toolbox is an easy-to-use instrument for an initial assessment of a given set of findings.
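The mechanism the abstract describes, running several analyses and reporting only the one that crosses p < .05, can be illustrated with a short simulation. The sketch below is not from the article; it is a minimal, self-contained Python illustration (stdlib only) that assumes a one-sample z-test on standard-normal null data, and the function names (`p_value_z`, `false_positive_rate`) and parameter values are hypothetical choices for the demonstration.

```python
import math
import random

def p_value_z(sample):
    # Two-sided p-value of a one-sample z-test against mean 0,
    # assuming a known standard deviation of 1.
    n = len(sample)
    z = sum(sample) / math.sqrt(n)
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(n_studies=4000, n_obs=30, analyses_per_study=1, seed=1):
    # Fraction of null-effect "studies" that report p < .05 when the
    # researcher runs `analyses_per_study` independent analyses and
    # selectively reports the smallest p-value (p-hacking).
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        best_p = min(
            p_value_z([rng.gauss(0, 1) for _ in range(n_obs)])
            for _ in range(analyses_per_study)
        )
        if best_p < 0.05:
            hits += 1
    return hits / n_studies

honest = false_positive_rate(analyses_per_study=1)   # close to the nominal .05
hacked = false_positive_rate(analyses_per_study=5)   # roughly 1 - 0.95**5, about .23
print(f"honest: {honest:.3f}  p-hacked: {hacked:.3f}")
```

With a single pre-specified analysis the false-positive rate stays near the nominal 5 %; picking the best of five independent analyses inflates it to roughly 1 − 0.95⁵ ≈ 23 %, which is the pattern the toolbox's heuristics are designed to flag in published results.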