Abstract

Research data is increasingly viewed as an important scholarly output. While a growing body of studies has investigated researcher practices and perceptions related to data sharing, information about data-related practices throughout the research process (including data collection and analysis) remains largely anecdotal. Building on our previous study of data practices in neuroimaging research, we conducted a survey of data management practices in the field of psychology. Our survey included questions about the type(s) of data collected, the tools used for data analysis, and practices related to data organization, documentation, backup, and the long-term archiving of research materials. Our results demonstrate the complexity of managing and sharing data in psychology. Data is collected in multifarious forms from human participants, analyzed using a range of software tools, and archived in formats that may become obsolete. As individuals, our participants demonstrated relatively good data management practices; however, they also indicated that there was little standardization within their research group. Participants generally indicated that they were willing to change their current practices in light of new technologies, opportunities, or requirements.

Highlights

  • Interrelated issues, including an overrepresentation of positive results [1, 2], unreported flexibility in analytical methods [3, 4, 5], and low levels of statistical power [6, 7], have resulted in uncertainty about the robustness of results described in the psychology literature

  • As researchers in and beyond the field have grappled with these issues, a variety of data stakeholders—including scholarly publishers, research funding agencies, and researchers themselves—have increasingly recognized data as an important research product

  • A total of 274 psychology researchers from 31 countries participated in our survey and met our inclusion criteria



Introduction

Interrelated issues, including an overrepresentation of positive results [1, 2], unreported flexibility in analytical methods [3, 4, 5], and low levels of statistical power [6, 7], have resulted in uncertainty about the robustness of results described in the psychology literature. The lack of availability of the data and other materials underlying published results has been acknowledged as a problem for more than half a century [8, 9, 10, 11, 12]. In at least one study, researchers reported that they could not make their data available because it was lost, inaccessible, or would require a considerable investment of time to make usable [13]. These results are consistent with complementary work examining barriers to data sharing in psychology [14] as well as more broadly [15], and are illustrative of an important distinction—if datasets …

