In nonspeech communication, pitch, duration, formant arrangement, and other acoustic features are the bearers of coded information; certain meanings are assigned to them by convention. The processing of nonspeech signals consists in distinguishing and, subsequently, interpreting them. Oral speech messages are not, by contrast, amenable to such single-layer ''technology.'' Each oral utterance betrays a two-layer organization. Here, the true information carriers are coded articulatory changes within the speech channel. Such gestures may be successfully ''read'' directly in TADOMA communication (by the hand placed against the face of the speaker) or in ''inner speech'' (through bioimpulses announcing rudimentary movements of our speech organs). Mostly, however, the invisible articulations require certain ''echo effects'' to become perceivable; usually, the ''echoes'' of voice, noise, whistle, and light (cf. ''lip reading'') are used. Accordingly, the acoustician's mission in speech processing ought to be reduced to restoring, from the sound characteristics of utterances, the invisible changes within the throat of the speaker (through improved ''inverse mapping,'' directional-hearing methodology, etc.?). Starting from these data, physiologists and linguists could decipher messages using a ''lexicon of speech gestures.''
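To make the notion of ''inverse mapping'' concrete, the sketch below shows one simple and heavily idealized way to recover an articulatory description from acoustic features: a nearest-neighbor lookup in a small formant-to-gesture codebook. The codebook, its articulatory labels, and the invert function are illustrative assumptions, not a method proposed in the text; the formant values are rough averages for adult male vowels, and real acoustic-to-articulatory inversion is a far more elaborate, ill-posed problem.

```python
# Minimal sketch of codebook-based acoustic-to-articulatory inversion.
# All entries and parameter names below are illustrative placeholders,
# not measured data or a published method.
import math

# Hypothetical codebook: (F1 Hz, F2 Hz) -> rough articulatory description.
# Practical systems use thousands of entries derived from vocal-tract models.
CODEBOOK = [
    ((270, 2290), {"tongue_height": "high", "tongue_backness": "front"}),  # ~/i/
    ((660, 1720), {"tongue_height": "low",  "tongue_backness": "front"}),  # ~/ae/
    ((730, 1090), {"tongue_height": "low",  "tongue_backness": "back"}),   # ~/a/
    ((300,  870), {"tongue_height": "high", "tongue_backness": "back"}),   # ~/u/
]

def invert(f1: float, f2: float) -> dict:
    """Return the articulatory parameters of the nearest codebook entry."""
    def distance(entry):
        (e1, e2), _ = entry
        return math.hypot(f1 - e1, f2 - e2)
    _, gesture = min(CODEBOOK, key=distance)
    return gesture

if __name__ == "__main__":
    # Formants measured from a vowel segment (illustrative values).
    print(invert(290, 2200))  # -> high front tongue position: an /i/-like gesture
```

Even this toy lookup exposes the difficulty latent in the passage: several articulatory configurations can produce nearly identical acoustics, so any practical ''inverse mapping'' must disambiguate among competing gestures before a ''lexicon of speech gestures'' can be consulted.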