Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a common and high-risk sleep-related breathing disorder, and snoring detection offers a simple, non-invasive way to screen for it. In many studies, feature maps are obtained with a short-time Fourier transform (STFT) and fed to the model as single-channel input tensors; this approach may limit a convolutional network's ability to learn diverse representations of snore signals. This paper proposes a snoring sound detection algorithm based on a multi-channel spectrogram and a convolutional neural network (CNN). Sleep recordings were collected from 30 hospital subjects, and four kinds of feature maps were extracted from them as model input: the spectrogram, the Mel-spectrogram, the continuous wavelet transform (CWT) scalogram, and a multi-channel spectrogram composed of the three single-channel maps. Three data-set partitioning schemes were used to evaluate the feature maps, and the feature maps were compared with a CNN model on subject-independent training and test sets. The results show that the multi-channel spectrogram reaches an accuracy of 94.18%, surpassing the Mel-spectrogram, which performs best among the single-channel maps. This study optimizes the feature extraction stage to better exploit the feature learning capability of the deep learning model, providing a more effective feature map for snoring detection.
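To make the multi-channel construction concrete, the sketch below stacks an STFT spectrogram, a Mel-spectrogram, and a CWT scalogram into one three-channel input. It is only an illustrative sketch, not the paper's exact pipeline: the sampling rate, FFT/hop sizes, number of Mel bands, CWT scales, the Morlet wavelet, the 64x64 output size, and the use of librosa, PyWavelets, and OpenCV are all assumptions introduced here.

```python
# Illustrative sketch of a multi-channel feature map (assumed parameters, not the paper's).
import numpy as np
import librosa
import pywt
import cv2


def multi_channel_spectrogram(y, sr=16000, size=(64, 64)):
    """Stack STFT, Mel, and CWT maps of one snore segment into a (H, W, 3) tensor."""
    # Channel 1: log-magnitude STFT spectrogram
    stft = librosa.stft(y, n_fft=512, hop_length=128)
    spec = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

    # Channel 2: log Mel-spectrogram
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512, hop_length=128, n_mels=64)
    mel = librosa.power_to_db(mel, ref=np.max)

    # Channel 3: CWT scalogram (Morlet wavelet over 64 scales; assumed choice)
    coeffs, _ = pywt.cwt(y, scales=np.arange(1, 65), wavelet="morl")
    cwt = np.log1p(np.abs(coeffs))

    # Resize each map to a common shape and min-max normalize to [0, 1]
    channels = []
    for m in (spec, mel, cwt):
        m = cv2.resize(m.astype(np.float32), size, interpolation=cv2.INTER_LINEAR)
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)
        channels.append(m)

    # The three single-channel maps become the channels of one CNN input
    return np.stack(channels, axis=-1)
```

Under these assumptions, the resulting tensor plays the same role as an RGB image for an ordinary image-classification CNN, so the three time-frequency views can be learned from jointly rather than one at a time.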