Abstract

Against the backdrop of the sociolinguistic-typological complexity debate, which centres on measuring, comparing and explaining language complexity, this article investigates how Kolmogorov-based information-theoretic complexity relates to linguistic structures. Specifically, it analyses the linguistic structure of text that has been compressed with the text compression algorithm gzip. One implementation of Kolmogorov-based language complexity is the compression technique (Ehret, Katharina. 2021. An information-theoretic view on language complexity and register variation: Compressing naturalistic corpus data. Corpus Linguistics and Linguistic Theory (2). 383–410), which employs gzip to measure language complexity in naturalistic text samples. To determine what types of structures compression algorithms like gzip capture, and how these compressed strings relate to linguistically meaningful structures, gzip’s lexicon output is retrieved and subjected to an in-depth analysis. As a case study, the compression technique is applied to the English version of Lewis Carroll’s Alice’s Adventures in Wonderland and its lexicon output is extracted. The results show that gzip-like algorithms sometimes capture linguistically meaningful structures which coincide, for instance, with lexical words or suffixes. However, many compressed sequences are linguistically unintelligible or simply do not coincide with any linguistically meaningful structure. Crucially, compression algorithms like gzip capture purely formal structural regularities. As a consequence, information-theoretic complexity, in this context, is a linguistically agnostic, purely structural measure of regularity and redundancy in texts.
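
To illustrate the general idea behind Kolmogorov-based complexity measurement with gzip, the sketch below compresses a text sample and reports its compressed size and compression ratio. This is a minimal illustration of the underlying principle only, not the exact pipeline used in Ehret (2021) or the lexicon-extraction procedure analysed here; the file path and helper names are placeholders.

```python
# Minimal sketch of compression-based complexity measurement: the size of the
# gzip-compressed text approximates its Kolmogorov complexity, and the
# compression ratio reflects how much regularity/redundancy the text contains.
# Not the exact procedure of Ehret (2021); file path is a placeholder.
import gzip


def compressed_size(text: str) -> int:
    """Return the size in bytes of the gzip-compressed UTF-8 text."""
    return len(gzip.compress(text.encode("utf-8")))


def compression_ratio(text: str) -> float:
    """Compressed size divided by raw size; lower values mean more redundancy."""
    raw = len(text.encode("utf-8"))
    return compressed_size(text) / raw if raw else 0.0


if __name__ == "__main__":
    # Placeholder file name for a naturalistic text sample, e.g. Alice's
    # Adventures in Wonderland.
    with open("alice_in_wonderland.txt", encoding="utf-8") as f:
        sample = f.read()
    print("compressed size:", compressed_size(sample))
    print("compression ratio:", round(compression_ratio(sample), 3))
```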