This scientific commentary refers to ‘Universal and language-specific sublexical cues in speech perception: a novel electroencephalography-lesion approach’, by Obrig et al. (doi:10.1093/brain/aww077).

Understanding spoken language, an activity in which we are engaged for a large part of our waking lives, is generally an effortless, automatic process. This apparent ease stands in contrast to the complexity of the processing required to extract meaning from sound waves. The sound structure of speech is based on multiple, rapidly changing frequencies, requiring ultrafast analysis over different timescales. In this issue of Brain, Obrig and co-workers provide novel information about a crucial source of cues that helps the brain cope with this formidable challenge: the sublexical structure of words (Obrig et al., 2016).

One of the fundamental tenets of modern linguistics is that there is no boundless variation in language structure (Chomsky, 1981; Moro, 2015). While this idea has been developed (and debated) chiefly in the case of syntax, it also applies at the phonological level. Of course, at that level there are clear physical determinants of the sounds that the human vocal system can and cannot produce, given its anatomical and physiological features. Beyond these general constraints, there is also evidence for universal preferences in the selection of phonemic sequences. One well-known principle, described by Clements (1990), is based on the concept of a sonority hierarchy: the ideal syllable has a ‘peak’ at a vowel, with sonority rising from the syllable onset to the peak and then possibly falling. Here, ‘universal’ indicates a general preference rather than a law. ‘Fra’ is a common syllable across languages; ‘Mzda’ is a violation …