Abstract

The development of information technology (IT) in the last decade has ushered us into the ‘big data’ era. Whereas sampling error has been a central concern of empirical studies over the last century, extremely large datasets virtually rule out the possibility that statistics generated from samples are not also true of the population. The critical discussion must therefore move on to how much the exogenous and endogenous variables explain the ultimate outcomes; the emphasis will have to shift from generalizability to explanatory power. By examining three common statistics – the t-value, the F-value and the beta coefficient (or ‘b’ in the sample) – this paper aims to show how important effect size analysis becomes as samples grow ever larger and big data becomes the norm. Finally, the paper points out a gap in our knowledge of what ranges of effect sizes are considered acceptable across our sundry scientific literatures. We call for studies in this regard so that the social sciences can move forward, strengthen weak theoretical bases, and recognize strong theories when they do appear.
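
To make the abstract's point concrete, the following sketch (an illustrative assumption on our part, not an analysis from the paper, using Python with NumPy and SciPy) simulates two groups that differ by a trivially small amount. As the sample size grows, the t-statistic and the apparent significance explode, while the effect size (Cohen's d) stays roughly constant – which is why the discussion must shift from significance to explanatory power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# A small but real difference between two groups (Cohen's d of about 0.05).
true_effect = 0.05

for n in (100, 10_000, 1_000_000):
    group_a = rng.normal(loc=0.0, scale=1.0, size=n)
    group_b = rng.normal(loc=true_effect, scale=1.0, size=n)

    # Significance test: the t-statistic grows with sqrt(n), so any nonzero
    # difference eventually becomes "significant" in a big enough sample.
    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    # Effect size (Cohen's d) does not depend on n in the same way.
    pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

    print(f"n={n:>9,}  t={t_stat:8.2f}  p={p_value:.3g}  d={cohens_d:.3f}")
```

Running the sketch shows the p-value collapsing toward zero at n = 1,000,000 even though the underlying effect remains tiny, illustrating why effect size, not statistical significance, should carry the argument in big-data studies.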
