Most studies on Alzheimer's disease (AD) have been carried out using medical images, but medical imaging data are difficult to acquire. Identification based on a patient's speech can substantially reduce medical costs, and speech can be collected non-invasively, so patient data can be gathered accurately and in real time. This paper proposes a new method that uses spectrogram features extracted from speech data to identify AD, which can help families understand the progression of the disease at an earlier stage so that they can take measures in advance to delay it. We use speech data collected from elderly subjects and apply machine learning methods to the features exhibited in their speech to identify AD. For our experiments, we collect a new speech dataset comprising Alzheimer's disease patients and healthy control subjects, and we compare it with the speech data made available by the Dem@Care project. Among the tested models, LogisticRegressionCV exhibits the best performance. The results show that identifying AD from spectrogram features extracted from speech data is feasible, and they demonstrate both the credibility of the new dataset and the effectiveness of the methods used in this paper.
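The pipeline described above (spectrogram features from speech clips fed to a cross-validated logistic regression classifier) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it uses synthetic tone-plus-noise clips as stand-ins for real patient/control recordings, `scipy.signal.spectrogram` for feature extraction, and scikit-learn's `LogisticRegressionCV`; the sample rate, window length, and time-averaging of the log-power spectrogram are all assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
SR = 8000  # assumed sample rate (Hz)

def make_clip(f0):
    # Synthetic 1-second stand-in for a speech recording: tone + noise.
    t = np.arange(SR) / SR
    return np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(SR)

def spec_features(clip):
    # Log-power spectrogram, averaged over time -> fixed-length vector.
    _, _, Sxx = spectrogram(clip, fs=SR, nperseg=256)
    return np.log(Sxx + 1e-10).mean(axis=1)

# Two synthetic "classes" differing in spectral content (labels alternate).
y = np.arange(40) % 2
X = np.array([spec_features(make_clip(200 if label == 0 else 400))
              for label in y])

Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# LogisticRegressionCV tunes its regularization strength by cross-validation.
clf = LogisticRegressionCV(cv=3, max_iter=2000).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(f"held-out accuracy: {acc:.2f}")
```

On real data, the synthetic clip generator would be replaced by loaded audio recordings, but the feature extraction and classification steps follow the same shape.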