This study examined how listeners disambiguate an auditory scene comprising multiple competing unknown sources and identify a salient source. Experiment 1 replicated findings from McDermott, Wrobleski, and Oxenham [(2011). Proc. Natl. Acad. Sci. U. S. A. 108(3), 1188-1193] using a multivariate Gaussian model to generate mixtures of two novel sounds. Listeners were unable to identify either sound in a mixture despite repeated exposure unless one sound was repeated several times, each time mixed with a different distractor. These results support the idea that repetition provides a basis for segregating a single source from competing novel sounds. Subsequent experiments extended the identification task to a recognition task, and the results were modeled. To confirm the repetition benefit, experiment 2 asked listeners to recognize a temporal ramp in either a repeating sound or non-repeating sounds. The perceptual salience of the repeating sound allowed robust recognition of its temporal ramp, whereas similar features were ignored in the non-repeating sounds. The responses of two neural models of learning, generalized Hebbian learning and anti-Hebbian learning, were compared with the listener results from experiment 2. The Hebbian network showed a response pattern similar to that of the listeners, whereas the anti-Hebbian output showed the opposite pattern.
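The mixture-generation procedure can be illustrated with a minimal sketch. The bin count, the exponential-decay covariance, and the number of presentations below are illustrative assumptions, not the parameters of the original stimuli; the idea is only that each "novel sound" is a sample from a multivariate Gaussian with smooth spectrotemporal correlations, and that one target sample is superimposed on a fresh distractor sample on every presentation so that only the target repeats.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins = 32  # hypothetical number of (flattened) time-frequency bins
# Covariance with exponentially decaying correlation between nearby bins,
# a stand-in for the smooth spectrotemporal structure of natural sounds.
idx = np.arange(n_bins)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 4.0)

def novel_sound():
    """Sample one 'novel sound' as a Gaussian-distributed spectrogram vector."""
    return rng.multivariate_normal(np.zeros(n_bins), cov)

target = novel_sound()
# Mix the same target with a different distractor on every presentation,
# so only the target's structure is shared across the mixtures.
mixtures = [target + novel_sound() for _ in range(10)]
```

In this sketch only the sum reaches the "listener"; recovering the target from the mixtures requires exploiting its repetition across presentations.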
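The contrast between the two learning rules can be sketched for a single linear unit. The network size, learning rate, and step count below are arbitrary illustrative choices, not the models used in the study; the sketch shows only the core sign difference — a Hebbian update strengthens the response to a repeatedly presented pattern, while an anti-Hebbian update suppresses it.

```python
import numpy as np

eta = 0.01  # hypothetical learning rate
x_rep = np.random.default_rng(1).normal(size=16)  # the repeated input pattern
x_rep /= np.linalg.norm(x_rep)

def train(sign, n_steps=200):
    """One linear unit y = w.x; sign=+1 gives Hebbian, sign=-1 anti-Hebbian."""
    # Same initial weights for both rules, for a fair comparison.
    w = np.random.default_rng(2).normal(scale=0.1, size=16)
    for _ in range(n_steps):
        y = w @ x_rep
        w += sign * eta * y * x_rep  # (anti-)Hebbian weight update
    return w @ x_rep                 # final response to the repeated input

hebb = train(+1)
anti = train(-1)
# The Hebbian unit's response to the repeated pattern grows with exposure,
# while the anti-Hebbian unit's response decays toward zero.
```

Because `x_rep` is unit-norm, each step scales the response by (1 + sign * eta), so repetition amplifies the Hebbian response and attenuates the anti-Hebbian one, mirroring the opposite response patterns reported for the two models.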