In recent years, there has been increasing interest in statistical approaches and tools from machine learning for the analysis of audio and music signals. The application of statistical techniques is quite natural: acoustical time series can be conveniently modeled with hierarchical signal models that incorporate prior knowledge from various sources, such as physics or studies of human cognition and perception. Once a realistic hierarchical model is constructed, many audio processing tasks, such as coding, restoration, transcription, separation, identification, or resynthesis, can be formulated consistently as Bayesian posterior inference problems. This contribution illustrates various realistic generative signal models for audio and music signal analysis. In particular, factorial switching state space models, Gamma-Markov random fields, and point process models will be discussed. Some models admit exact inference; for the others, efficient algorithms based on variational or stochastic approximation methods can be developed. We will illustrate the approach on music transcription, restoration, and source separation applications. [Work supported by EPSRC.]
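To make the idea of "audio tasks as posterior inference" concrete, here is a minimal sketch (not any of the models named above) of a toy hierarchical signal model: a Gaussian prior over the amplitude of a known sinusoid, observed in Gaussian noise. Because the model is conjugate, the posterior over the amplitude is Gaussian and available in closed form; all model parameters and names here are illustrative assumptions.

```python
import math
import random

# Toy hierarchical model (illustrative sketch only):
#   a ~ N(mu0, s0^2)                          prior amplitude of a note
#   y_t = a * sin(2*pi*f*t/sr) + e_t,  e_t ~ N(0, sigma^2)   observations
# Estimating `a` from the noisy signal is then a Bayesian posterior
# inference problem with a closed-form Gaussian answer.

def amplitude_posterior(y, f, sr, mu0, s0, sigma):
    """Return the mean and std of the Gaussian posterior over the amplitude."""
    phi = [math.sin(2 * math.pi * f * t / sr) for t in range(len(y))]
    # Precision adds: prior precision plus data precision.
    prec = 1.0 / s0**2 + sum(p * p for p in phi) / sigma**2
    mean = (mu0 / s0**2 + sum(p * v for p, v in zip(phi, y)) / sigma**2) / prec
    return mean, math.sqrt(1.0 / prec)

random.seed(0)
sr, f, a_true, sigma = 8000, 440.0, 0.8, 0.1
y = [a_true * math.sin(2 * math.pi * f * t / sr) + random.gauss(0.0, sigma)
     for t in range(2000)]

mu_n, s_n = amplitude_posterior(y, f, sr, mu0=0.0, s0=1.0, sigma=sigma)
print(mu_n, s_n)  # posterior mean near 0.8, posterior std well below sigma
```

The richer models discussed in the abstract (switching state space models, Gamma-Markov random fields, point processes) follow the same recipe but generally lack such closed forms, which is where variational and stochastic approximation methods enter.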