Unlike non-human animal studies, which have progressively demonstrated the advantages of asymmetry at the individual, group and population levels, human studies paint a far less consistent picture. In particular, it remains unclear whether and how the strength of an individual's lateralization relates to their cognitive performance. While some of these inconsistencies can be attributed to procedural and conceptual differences, the problem is aggravated by the fact that the intrinsic mathematical interdependence between measures of laterality and performance produces spurious correlations that can be mistaken for evidence of an adaptive advantage of asymmetry. Leask and Crow [Leask, S. J., & Crow, T. J. (1997). How far does the brain lateralize?: an unbiased method for determining the optimum degree of hemispheric specialization. Neuropsychologia, 35(10), 1381–1387] devised a method of overcoming this problem that has since been used in several large-sample studies of the asymmetry–performance relationship. In this paper we show that the original Leask and Crow method and its later variants fall victim to inherent nonlinear dependencies and produce artifacts. By applying the Leask and Crow method to random data, and through mathematical analysis, we demonstrate that what has been taken to describe the true asymmetry–performance relationship in fact reflects only the idiosyncrasies of the method itself. We suggest that the approach taken by Leask in a later paper [Leask, S. (2003). Principal curve analysis avoids assumptions of dependence between measures of hand skill. Laterality, 8(4), 307–316. doi:10.1080/13576500342000004] may be preferable.
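The kind of spurious coupling referred to above can be illustrated with a minimal simulation. The sketch below is not the Leask and Crow procedure itself; it only shows how a conventional laterality index and a performance summary, both computed from the same two hand-skill scores, become interdependent even when the scores are generated independently at random. All variable names and parameter values (e.g. n, left, right, the normal distribution settings) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the Leask & Crow method itself): with purely random,
# independent hand-skill scores, a laterality index and a performance summary
# derived from the same two numbers are already mathematically coupled, so an
# apparent asymmetry-performance relation emerges from the arithmetic alone.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # illustrative sample size

# Independent left- and right-hand scores: no true link between asymmetry
# and skill is built into these data.
left = rng.normal(loc=100, scale=15, size=n)
right = rng.normal(loc=100, scale=15, size=n)

laterality = (right - left) / (right + left)   # conventional laterality index
performance = (right + left) / 2               # overall performance summary

# The strength of lateralization |LI| correlates negatively with performance
# even though the underlying scores are independent: the index's denominator
# is (twice) the performance measure, so the index's spread shrinks as
# performance grows.
r = np.corrcoef(np.abs(laterality), performance)[0, 1]
print(f"corr(|laterality|, performance) on random data: {r:.3f}")
```

Running the sketch yields a reliably negative correlation, which could be misread as "more strongly lateralized individuals perform worse" although the data contain no such relationship by construction.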