Abstract

Statistical learning is the ability to learn from the transitional probabilities (TPs) of sequential information, and it has been considered to contribute to creativity in music. The interdisciplinary theory of statistical learning examines it as a mechanism of human learning. This study investigated how the TP distributions and the conditional entropies of the melody and bass line in music interact, using the highest and lowest pitches in Beethoven’s piano sonatas and Johann Sebastian Bach’s Well-Tempered Clavier. The results for the two composers were similar. First, the analysis detected specific statistical characteristics unique to the melody and to the bass line, as well as general statistical characteristics shared between them. Additionally, the conditional entropies sampled from the TP distributions were correlated between the melody and bass line, suggesting that the variability of entropy in one voice interacts with that of the other. In summary, this study suggests that the TP distributions and entropies of the melody and bass line interact with, but are partly independent of, each other.

Highlights

  • It was hypothesized that there were general statistical characteristics shared between the melody and bass line, as well as specific statistical characteristics unique to the melody and to the bass line, for each order of the Markov model

  • In Study 2, using Johann Sebastian Bach’s Well-Tempered Clavier, BWV 846–893, which has preludes and fugues in all 24 major and minor keys, we investigated the interaction between the zeroth- to fifth-order transitional probability (TP) distributions (Markov models) and the conditional entropies in the melody and bass line

  • It was hypothesized that there were general statistical characteristics shared between the melody and the bass line and between the major and minor keys, as well as specific statistical characteristics that were unique to each melody and bass line and to each major or minor key

Introduction

Statistical learning (SL) has been considered a domain-general, implicit learning system that encodes the probabilistic distributions of sequential phenomena such as music and language [1,2,3]. The brain’s SL machinery automatically computes the transitional probability (TP) distributions of a sequence, calculates the uncertainty (entropy) of those distributions, and predicts future states from the internalized statistical model, in order to minimize sensory-processing cost and uncertainty and to optimize the efficiency of prediction. When a brain or a computer encodes the TP distribution of a sequence, it anticipates probable future stimuli with high TPs and suppresses the processing load that would otherwise arise in response to predictable states [4,5].
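The two quantities described above can be sketched in code. The following is a minimal illustration, not the authors’ analysis pipeline: it estimates an n-th-order TP distribution P(next pitch | context) from a pitch sequence and then computes the conditional entropy of that distribution, weighted by how often each context occurs. The toy melody (MIDI pitch numbers) is a hypothetical example, not drawn from either corpus.

```python
from collections import Counter, defaultdict
import math

def transition_probabilities(sequence, order=1):
    """Estimate P(next | context) for an n-th-order Markov model."""
    counts = defaultdict(Counter)
    for i in range(len(sequence) - order):
        context = tuple(sequence[i:i + order])
        counts[context][sequence[i + order]] += 1
    return {ctx: {sym: n / sum(c.values()) for sym, n in c.items()}
            for ctx, c in counts.items()}

def conditional_entropy(tps, sequence, order=1):
    """H(next | context) in bits, weighted by context frequency."""
    contexts = [tuple(sequence[i:i + order])
                for i in range(len(sequence) - order)]
    ctx_freq = Counter(contexts)
    total = sum(ctx_freq.values())
    h = 0.0
    for ctx, n in ctx_freq.items():
        h -= (n / total) * sum(p * math.log2(p)
                               for p in tps[ctx].values())
    return h

# Hypothetical melody as MIDI pitch numbers (C-D-E figures around middle C)
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
tps = transition_probabilities(melody, order=1)
print(conditional_entropy(tps, melody, order=1))  # low entropy: the melody is highly predictable
```

Raising `order` from 0 to 5, as in the study’s zeroth- to fifth-order models, simply lengthens the context tuple; higher orders yield sharper (lower-entropy) distributions when the corpus is large enough to populate them.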
