Widespread methods for automatic speech processing have grown increasingly powerful, but they do not attempt to model human speech processing. Prior research has shown that individual acoustic cues play an important role in human speech perception. A model that can identify individual acoustic cues in speech makes it possible to extract meaningful information from the signal, robust to speaker variation and phonemic context, while remaining transparent and testable as a model of human perception. This research presents a module for the automatic analysis of the spectral burst cue in fricative and plosive speech sounds. The method infers the place of articulation of the consonant by computing spectral moment measurements near locations where a burst is likely to occur and using them as features in a Gaussian mixture model. This work lays the groundwork for a dynamic model designed to be consistent with the Bayesian belief-updating framework, in alignment with prior work on human speech perception.
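As an illustration of the pipeline described above, the following sketch computes the first four spectral moments (centroid, spread, skewness, kurtosis) of a burst spectrum and fits one Gaussian mixture per place-of-articulation class, classifying a new burst by maximum likelihood. The synthetic spectra, the class labels, and the specific peak frequencies are placeholders invented for this example, not values from the study; the moment definitions and the GMM classification step are standard.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def spectral_moments(power_spectrum, freqs):
    """First four spectral moments of a power spectrum, treating the
    normalized spectrum as a probability distribution over frequency."""
    p = power_spectrum / power_spectrum.sum()
    centroid = np.sum(freqs * p)                          # 1st moment (mean)
    sd = np.sqrt(np.sum((freqs - centroid) ** 2 * p))     # 2nd moment (spread)
    skew = np.sum(((freqs - centroid) / sd) ** 3 * p)     # 3rd (asymmetry)
    kurt = np.sum(((freqs - centroid) / sd) ** 4 * p)     # 4th (peakedness)
    return np.array([centroid, sd, skew, kurt])

rng = np.random.default_rng(0)
freqs = np.linspace(0, 8000, 257)  # frequency bins, Hz (illustrative)

def synthetic_bursts(peak_hz, width_hz, n):
    # Stand-in burst spectra: a Gaussian-shaped peak plus a small noise floor.
    shape = np.exp(-0.5 * ((freqs - peak_hz) / width_hz) ** 2)
    return shape[None, :] + 0.01 * rng.random((n, freqs.size))

# Hypothetical peak frequencies for three place-of-articulation classes.
classes = {"labial": 1500.0, "velar": 2800.0, "alveolar": 4500.0}
models = {}
for name, peak in classes.items():
    X = np.array([spectral_moments(s, freqs)
                  for s in synthetic_bursts(peak, 600.0, 50)])
    models[name] = GaussianMixture(n_components=2, covariance_type="diag",
                                   random_state=0).fit(X)

# Classify a held-out alveolar-like burst by maximum average log-likelihood.
test = spectral_moments(synthetic_bursts(4500.0, 600.0, 1)[0], freqs)
best = max(models, key=lambda m: models[m].score(test[None, :]))
```

One mixture is trained per class so that the classification step reduces to comparing class-conditional likelihoods, which fits naturally into a Bayesian belief-updating account when combined with class priors.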