Abstract

We can imagine the act of musical composition as the selection of elements from several musical parameters. For example, the composer may choose more tonic than dominant harmonies, more quarter notes than half notes, and create a preponderance of conjunct rather than disjunct motions. These choices will bring about distributional characteristics that may belong to a style. Once made, these choices are, at any rate, identifiable characteristics of the music itself. Elements in musical parameters are not unlike characters in common speech alphabets. Communicative structures of substantial size are the end result of a complex series of choices that are selections from alphabetic pools in the case of written literature and, in the case of music, from the pools of elements in the several parameters that together comprise musical expression. The study of the selection and distribution of alphabetic characters is the domain of information theory. More than twenty years ago, Youngblood proposed that the computation of information content, the entropy of information theory, could serve as a method to identify musical style.¹ The entropy of information theory is a calculation of the freedom with which available alphabetic materials are used. Stated conversely, it
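
As an illustration of the idea (not drawn from the paper itself), the sketch below computes the standard Shannon entropy, H = -Σ pᵢ log₂ pᵢ, over the pitch-class distribution of a hypothetical melody. The function name and the toy melody are assumptions introduced here for demonstration; the measure itself is the standard information-theoretic entropy the abstract refers to.

    from collections import Counter
    from math import log2

    def shannon_entropy(symbols):
        """Shannon entropy (bits per symbol) of a sequence drawn from any alphabet."""
        counts = Counter(symbols)
        total = len(symbols)
        return -sum((n / total) * log2(n / total) for n in counts.values())

    # Hypothetical melody encoded as pitch classes (0 = C, 2 = D, 4 = E, ...).
    melody = [0, 2, 4, 5, 4, 2, 0, 0, 7, 5, 4, 2, 4, 0]

    # Entropy of the observed pitch-class distribution: higher values mean the
    # available alphabet is used more freely; lower values mean more constraint.
    h = shannon_entropy(melody)

    # Maximum possible entropy for this alphabet: all twelve pitch classes equally likely.
    h_max = log2(12)

    print(f"observed entropy: {h:.3f} bits")
    print(f"maximum entropy:  {h_max:.3f} bits")
    print(f"redundancy:       {1 - h / h_max:.3f}")

The same computation applies unchanged to any of the parameters the abstract mentions (harmonies, durations, melodic intervals): only the encoding of the symbol sequence differs.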
