Speech recognition based on EEG and MEG data is a first step toward BCI and AI systems for inner speech decoding. Substantial advances in this direction have been made with ECoG and stereo-EEG, whereas studies based on non-invasive recordings of brain activity remain scarce. Our approach evaluates connections in sensor space to identify a pattern of MEG connectivity specific to a given segment of speech. We tested the method on 7 subjects. In all cases, the processing pipeline was reliable and worked either without recognition errors or with only a few. After “training”, the algorithm is able to recognise a fragment of oral speech from a single presentation. For recognition, we used segments of the MEG recording spanning 50–1200 ms from the onset of the spoken word. High-quality recognition required a segment of at least 600 ms, while intervals longer than 1200 ms degraded recognition quality. Bandpass filtering of the MEG showed that recognition quality was nearly uniform across the frequency range, with a modest decrease observed only in the 9–14 Hz band.
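The sensor-space connectivity approach described above can be illustrated with a minimal sketch. This is not the authors' actual pipeline: the choice of Pearson correlation between sensor time series as the connectivity measure, the template-averaging step, and all function names are illustrative assumptions.

```python
import numpy as np

def connectivity_pattern(segment):
    """Connectivity feature for one MEG segment of shape (n_sensors, n_samples):
    the upper triangle of the sensor-by-sensor correlation matrix.
    (Correlation is an assumed stand-in for the paper's connectivity measure.)"""
    c = np.corrcoef(segment)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

def fit_templates(trials, labels):
    """Average the connectivity patterns of the training trials for each word,
    yielding one template pattern per word label."""
    templates = {}
    for lab in set(labels):
        feats = [connectivity_pattern(t) for t, l in zip(trials, labels) if l == lab]
        templates[lab] = np.mean(feats, axis=0)
    return templates

def classify(segment, templates):
    """Assign the label whose template correlates best with the
    single-presentation segment's connectivity pattern."""
    p = connectivity_pattern(segment)
    return max(templates, key=lambda lab: np.corrcoef(p, templates[lab])[0, 1])
```

On synthetic data where each "word" induces a distinct sensor-coupling structure, a classifier of this form recovers the label from a single unseen segment, mirroring the single-presentation recognition reported above.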