Speech perception requires the integration of information from multiple phonetic and phonological dimensions. Numerous studies have investigated the mapping between multiple acoustic-phonetic dimensions and single phonological dimensions (e.g., spectral and temporal properties of stop consonants in voicing contrasts). Far fewer studies have addressed relationships between phonological dimensions. Most such studies have focused on the perception of sequences of phones (e.g., bid, bed, bit, and bet), though some have focused on multiple phonological dimensions within phones (e.g., voicing and place of articulation in [p], [b], [t], and [d]). However, previous findings are limited in important ways by strong assumptions about the relevant acoustic-phonetic dimensions and/or about the nature of perceptual and decisional information integration. New methodological developments in the general recognition theory framework enable a number of these assumptions to be tested and provide a more complete model of distinct perceptual and decisional processes in speech sound identification. A non-parametric Bayesian analysis of syllable-onset consonant identification data from two experiments indicates that, for most subjects, the integration of phonological information is partially independent in both perception and decision making, and that patterns of independence and interaction vary with the set of phonological dimensions under consideration (e.g., voicing and place of articulation versus voicing and manner of articulation).