People with transhumeral amputation or elbow disarticulation experience a severe loss of upper-limb functionality: they lose the primary muscles involved in hand and finger movement and in elbow flexion/extension. Myoelectric prostheses developed for this population must therefore include mechanisms that replicate these movements. Moreover, the prosthesis control system must be robust and intuitive, and pattern recognition and artificial intelligence techniques represent an important and increasingly explored approach. This study aims to develop a movement intention classification system based on myoelectric signals (MES) obtained from an elbow disarticulation amputee. Several features were extracted from the MES: mean absolute value (MAV), zero crossings (ZC), slope sign changes (SSC), waveform length (WL), and autoregressive (AR) coefficients. Machine learning classifiers (Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Random Forests (RF), and Multilayer Perceptron (MLP)) were used to predict seven gestures: elbow flexion (EF), elbow extension (EE), forearm pronation (FP), forearm supination (FS), hand opening (HO), hand closing (HC), and rest (R). LDA, QDA, KNN, MLP, SVM, and RF achieved accuracies of 60.84%, 71.05%, 77.10%, 78.06%, 78.57%, and 79.72%, respectively. These results are promising and support the development of a myoelectric prosthesis control system for people with transhumeral amputation or elbow disarticulation.
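As an illustration of the pipeline described above, the minimal sketch below computes the time-domain features named in the abstract (MAV, ZC, SSC, WL) plus least-squares AR coefficients over windowed multichannel EMG, then compares the six classifiers with scikit-learn. The window size, noise threshold, AR order, classifier hyperparameters, and the synthetic data are illustrative assumptions only; they are not the study's actual acquisition settings, recordings, or tuned models.

```python
# Hedged sketch: feature extraction + classifier comparison for EMG gesture
# recognition. All numeric settings and the random data are assumptions.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def td_features(x, thresh=0.01):
    """Time-domain features for one single-channel EMG window (1-D array)."""
    dx = np.diff(x)
    mav = np.mean(np.abs(x))           # mean absolute value
    wl = np.sum(np.abs(dx))            # waveform length
    # zero crossings: sign change whose amplitude step exceeds a noise threshold
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > thresh))
    # slope sign changes: derivative changes sign above the same threshold
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                 ((np.abs(dx[:-1]) > thresh) | (np.abs(dx[1:]) > thresh)))
    return np.array([mav, zc, ssc, wl])

def ar_coeffs(x, order=4):
    """AR coefficients via least squares on lagged samples (order is assumed)."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def window_features(window):
    """Concatenate TD + AR features over all channels of one window.

    `window` has shape (n_channels, n_samples).
    """
    return np.concatenate([np.r_[td_features(ch), ar_coeffs(ch)]
                           for ch in window])

# Hypothetical data: 700 windows of 8-channel EMG, 200 samples per window,
# with labels 0..6 standing in for the seven gestures (EF, EE, FP, FS, HO, HC, R).
rng = np.random.default_rng(0)
windows = rng.standard_normal((700, 8, 200))
labels = rng.integers(0, 7, size=700)
X = np.array([window_features(w) for w in windows])

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy {acc:.2%}")
```

On random data the printed accuracies hover near chance (about 14% for seven classes); the point of the sketch is the structure of the pipeline, not the numbers, which in the study come from real amputee recordings.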