Multi-channel acoustic source localization evaluates direction-dependent inter-microphone differences in order to estimate the position of an acoustic source embedded in an interfering sound field. Here we investigate a deep neural network (DNN) approach to source localization that improves on previous work with learned, linear support-vector-machine localizers. DNNs with depths between 4 and 15 layers were trained to predict the azimuth direction of target speech in 72 directional bins of 5-degree width, embedded in an isotropic, multi-speech-source noise field. Several system parameters were varied; in particular, the number of microphones in the bilateral hearing-aid scenario was set to 2, 4, and 6.
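The classification setup described above can be sketched as a feed-forward network that maps a per-frame inter-microphone feature vector to a posterior over the 72 azimuth bins. The following is a minimal NumPy illustration with placeholder layer sizes and random, untrained weights; the feature dimension, depth, and initialization are assumptions for illustration, not the configuration used in the study.

```python
import numpy as np

N_BINS = 72      # 360 degrees divided into 5-degree bins
FEAT_DIM = 128   # assumed inter-microphone feature dimension (placeholder)

rng = np.random.default_rng(0)

def init_layers(dims):
    """Random weights and biases for a stack of fully connected layers."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def forward(x, layers):
    """ReLU MLP with a softmax output over the azimuth bins."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)   # ReLU on hidden layers only
    e = np.exp(x - x.max())
    return e / e.sum()               # softmax posterior over 72 bins

def bin_to_azimuth(k):
    """Centre angle (in degrees) of directional bin k for 5-degree bins."""
    return k * 5.0 + 2.5

# A shallow example network; the paper varies depth between 4 and 15 layers.
layers = init_layers([FEAT_DIM, 256, 256, N_BINS])
posterior = forward(rng.standard_normal(FEAT_DIM), layers)
estimated_azimuth = bin_to_azimuth(int(np.argmax(posterior)))
```

A trained localizer would fit the weights on labelled target-in-noise scenes; here the argmax of the softmax posterior simply selects the most likely of the 72 directional bins.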
Results show that DNNs provide a clear improvement in localization performance over a linear classifier reference system. Increasing the number of microphones from 2 to 4 yields a larger performance gain for the DNNs than for the linear system, whereas 6 microphones provide only a small additional gain. The DNN architectures perform better with 4 microphones than the linear approach does with 6 microphones, indicating that location-specific information in source-interference scenarios is encoded non-linearly in the sound field.