ABSTRACT

Understanding users’ whole-body gesture performance quantitatively requires numerical gesture descriptors or features. However, the vast majority of gesture features proposed in the literature were designed specifically for machines to recognize gestures accurately, which makes those features exclusively machine-readable. The complexity of such features makes it difficult for user interface designers, who are not experts in machine learning, to understand and use them effectively (see, for instance, the Hu moment statistics or the Histogram of Gradients features), which considerably reduces designers’ options for describing users’ whole-body gesture performance with legible and easily interpretable numerical measures. To address this problem, we introduce a set of 17 measures that user interface practitioners can readily employ to characterize users’ whole-body gesture performance with human-readable concepts, such as area, volume, or quantity. Our measures describe (1) spatial characteristics of body movement, (2) kinematic performance, and (3) body posture appearance for whole-body gestures. We evaluate our measures on a public dataset of 5,654 gestures collected from 30 participants, for which we report several findings, e.g., participants performed body gestures in an average volume of space of 1.0 m³, with an average amount of hand movement of 14.6 m and a maximum body posture diffusion of 5.8 m. We show the relationship between our gesture measures and the recognition rates delivered by a template-based Nearest-Neighbor whole-body gesture classifier that implements the Dynamic Time Warping dissimilarity function. We also release BOGArT, the Body Gesture Analysis Toolkit, which automatically computes our measures. This work will empower researchers and practitioners with new numerical tools to reach a better understanding of how users perform whole-body gestures and, thus, to use this knowledge to inform improved designs of whole-body gesture user interfaces.
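To make the flavor of such human-readable measures concrete, the sketch below illustrates how two of them, the total amount of hand movement (path length) and the volume of space swept by the body (axis-aligned bounding box), could be computed from a recorded sequence of 3-D joint positions. The array layout (frames × joints × coordinates), the function names, and the joint index are illustrative assumptions for this sketch only; they are not the BOGArT API or the paper's exact definitions.

```python
# Minimal illustrative sketch, not the BOGArT implementation.
# Assumes a gesture recorded as T frames of J joints with x, y, z coordinates in meters.
import numpy as np

def path_length(joint_track):
    """Total distance travelled by one joint over the gesture.

    joint_track: (T, 3) array of joint positions in meters.
    """
    steps = np.diff(joint_track, axis=0)            # frame-to-frame displacements
    return float(np.linalg.norm(steps, axis=1).sum())

def bounding_volume(gesture):
    """Volume (m^3) of the axis-aligned box enclosing all joints across all frames.

    gesture: (T, J, 3) array of joint positions in meters.
    """
    points = gesture.reshape(-1, 3)
    extents = points.max(axis=0) - points.min(axis=0)
    return float(np.prod(extents))

# Usage with synthetic data: 120 frames, 20 joints.
gesture = np.random.rand(120, 20, 3)
right_hand = gesture[:, 11, :]                      # hypothetical index of the right-hand joint
print(path_length(right_hand), bounding_volume(gesture))
```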