Many journal pages (in this and other journals) have been filled with discussions, techniques, and admonitions to share data. While enhanced data sharing is a critical (and lagging) step in the quest for more open, reliable, and reproducible science, the battle to reduce its social and ethical barriers remains a topic of much discussion. While that discussion rages on, the loss of data, and of the potential new insights that could be derived from it, mounts.

Through the data sharing efforts that have been undertaken, we have learned that the process of sharing often uncovers errors in the descriptive details of data. When data sharing is prospective (relative to when the results of the data may be published), these errors can be due to accidental mistakes in record keeping, misalignment of data from multiple sources, and the like. Retrospective (post-publication) data sharing can compound these potential local errors in the data descriptors with additional errors in identifying the specific final data that were used in a given publication. With the passage of time, and the inevitable changes in hardware, software, personnel, and so on, it becomes difficult to reconstruct exactly which data were included in a particular past publication. Data quality, for both local and shared purposes, is enhanced through the use of high-quality, enduring lab notebook and local data management systems.

One of the ‘benefits’ of data sharing is the identification and correction of potential errors in the data descriptions relative to the published data. Extrapolating from the errors that have been detected in the limited existing data sharing efforts, one must assume that numerous errors in data description have occurred in the published literature. For the most part, however, the extent, nature, and prevalence of these errors are unknown.
While such errors may often be innocuous with respect to the scientific conclusions derived from the data, they place an extra burden on the ability of studies to be replicated and hamper data aggregation for meta-analyses. Therefore, one way to avoid accidental data description errors is to undertake the process of preparing the raw data of a publication for data archiving/data sharing at the time of publication. There is a societal benefit (improved accuracy of the published data description) generated by the preparation-for-sharing process even if the data are never actually shared with any outside user. Conceptually, preparation for sharing offers an additional cross-validation step between the published description and what has been prepared for archive. This is analogous to the rationale for why ‘double entry’ is a standard process in many critical data acquisition procedures. Even if compelling arguments can be made for why a specific dataset should not be publicly shared at the time of publication, it is possible that future advances in technology, societal pressures, or ethical considerations may provide a solution that makes sharing of these data possible. As prospective data sharing is both less error prone in the first

1. D. N. Kennedy, “Share and share alike,” Neuroinformatics 1, 211 (2003); D. N. Kennedy, “Barriers to the socialization of information,” Neuroinformatics 2, 367 (2004); D. Kennedy, “Where’s the beef? Missing data in the information age,” Neuroinformatics 4, 271 (2006).
2. J. B. Poline et al., “Data sharing in neuroimaging research,” Front. Neuroinform. 6, 9 (2012).