Human echolocation describes how people, often blind, use reflected sound to obtain information about their surroundings. Using auditory models for three perceptual variables (loudness, pitch, and one aspect of timbre, namely sharpness), we determined how these variables enable people to detect objects by echolocation. As input to our analysis, we used acoustic recordings and the resulting perceptual data from a previous study of stationary situations. The first part of the analysis concerned the physical room acoustics of the sounds, i.e. sound pressure level, autocorrelation and spectral centroid. In the second part, we used auditory models to analyze the echolocation information carried by the perceptual variables loudness, pitch and sharpness. Based on these results, the third part was the calculation of psychophysical thresholds, with a non-parametric method, for detecting a reflecting object of constant physical size as a function of distance, loudness, pitch and sharpness. Difference thresholds were calculated for the psychophysical variables, since a two-alternative forced-choice paradigm had originally been used. We found (1) that detection thresholds based on repetition pitch, loudness and sharpness varied with room acoustics and the type of sound stimulus; (2) that repetition pitch was useful for detection at shorter distances and could be determined from the peaks in the temporal profile of the autocorrelation function; (3) that loudness provides echolocation information at shorter distances; and (4) that at longer distances, timbre aspects such as sharpness might be used to detect objects. (5) The results suggest that blind persons may detect objects at lower values of loudness, pitch strength and sharpness, and at greater distances, than sighted persons. We also discuss the auditory-model approach: autocorrelation was assumed to be a proper measure of pitch, but we ask whether a mechanism based on strobe integration is a viable alternative.
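The relationship between an echo delay, the resulting peak in the autocorrelation function, and the physical correlates named above can be sketched as follows. This is an illustrative reconstruction in Python/NumPy under our own assumptions (function names, toy stimulus, and thresholds are ours), not the auditory models or recordings used in the study.

```python
import numpy as np

def repetition_pitch(signal, fs):
    """Estimate repetition pitch (Hz) from the strongest non-zero-lag
    peak of the normalized autocorrelation function."""
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    ac = ac / ac[0]                   # normalize so lag 0 equals 1
    min_lag = int(fs / 2000)          # skip lags above 2 kHz pitch
    lag = min_lag + np.argmax(ac[min_lag:])
    return fs / lag                   # pitch is the inverse echo delay

def spectral_centroid(signal, fs):
    """Spectral centroid (Hz), a common physical correlate of sharpness."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

# Toy stimulus: 100 ms of broadband noise plus an attenuated copy
# delayed by 2 ms, mimicking a reflection from a nearby object.
fs = 44100
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs // 10)
delay = int(0.002 * fs)               # 2 ms echo delay
mix = noise.copy()
mix[delay:] += 0.5 * noise[:-delay]   # add the reflection

print(repetition_pitch(mix, fs))      # near 500 Hz, the inverse of 2 ms
print(spectral_centroid(mix, fs))
```

For a noise burst with a single echo, the autocorrelation peaks at the lag equal to the echo delay, so the estimated repetition pitch falls near 1/delay, consistent with finding (2) above; the centroid gives a single-number summary of where the spectral energy lies.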