Research on deep-learning-based ad-hoc microphone arrays has recently drawn much attention, especially for speech enhancement and separation. An ad-hoc microphone array may cover an area so large that multiple speakers stand far apart from each other and talk independently. It is therefore important to extract and track a specific speaker in such an array, a problem known as target-dependent speech separation, which aims to extract a target speaker from mixed speech. However, this problem has not yet been explored with ad-hoc arrays. In this paper, we propose deep ad-hoc beamforming based on speaker extraction, which is, to our knowledge, the first work on target-dependent speech separation with ad-hoc microphone arrays and deep learning. The algorithm contains two components. First, we propose a supervised channel selection framework based on speaker extraction, where the estimated utterance-level SNRs of the target speech serve as the basis for channel selection. Second, we feed the selected channels into a deep-learning-based MVDR algorithm, where a single-channel speaker extraction model is applied to each selected channel to estimate the mask of the target speech. We conducted extensive experiments on the WSJ0-adhoc and Libri-adhoc40 corpora. The results demonstrate the effectiveness of the proposed method in both simulated and real-world scenarios.
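To make the two stages concrete, the following is a minimal sketch, not the authors' implementation, of how utterance-level-SNR-based channel selection could feed a mask-based MVDR beamformer. The helper `estimate_snr` stands in for the paper's learned speaker-extraction model and is a hypothetical placeholder; the mask-based PSD estimates and the eigenvector steering vector are standard formulations assumed here for illustration.

```python
# Sketch of the two-stage pipeline under the assumptions stated above.
import numpy as np

def select_channels(stfts, estimate_snr, k):
    """Keep the k channels with the highest estimated target SNR.

    stfts: complex array, shape (C, F, T) -- one STFT per ad-hoc channel.
    estimate_snr: hypothetical callable mapping one (F, T) STFT to a
        scalar utterance-level SNR estimate of the target speech.
    """
    snrs = np.array([estimate_snr(x) for x in stfts])
    picked = np.argsort(snrs)[::-1][:k]  # indices of the k best channels
    return stfts[picked]

def mvdr_weights(stfts, masks, eps=1e-6):
    """Frequency-wise MVDR weights from speech/noise PSD matrices built
    with per-channel target masks (a common mask-based formulation).

    stfts: (C, F, T) complex; masks: (C, F, T) real-valued in [0, 1],
    e.g. produced by a single-channel speaker extraction model.
    Returns w: (F, C) complex beamforming weights.
    """
    C, F, T = stfts.shape
    w = np.zeros((F, C), dtype=complex)
    for f in range(F):
        X = stfts[:, f, :]                      # (C, T)
        m = masks[:, f, :].mean(axis=0)         # (T,) pooled target mask
        phi_s = (m * X) @ X.conj().T / T        # speech PSD estimate
        phi_n = ((1 - m) * X) @ X.conj().T / T  # noise PSD estimate
        phi_n += eps * np.eye(C)                # diagonal loading
        # Steering vector: principal eigenvector of the speech PSD.
        d = np.linalg.eigh(phi_s)[1][:, -1]
        num = np.linalg.solve(phi_n, d)         # Phi_n^{-1} d
        w[f] = num / (d.conj() @ num)           # normalize by d^H Phi_n^{-1} d
    return w

def beamform(stfts, w):
    """Apply the weights: Y(f, t) = w(f)^H X(f, t)."""
    return np.einsum('fc,cft->ft', w.conj(), stfts)
```

In this reading, channel selection discards distant, low-SNR microphones before beamforming, so the MVDR stage only fuses channels on which the speaker-extraction masks are likely to be reliable.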