Automatic sign language recognition systems have been presented in the literature to serve as helpful tools for deaf and mute individuals, enabling them to integrate into the community without impediments. Gesture recognition is usually performed using vision-based techniques or hand-attached sensors that track hand and finger movements. This paper presents a sensor-based Arabic Sign Language recognition system. The proposed system relies on the fusion of information from the 3-axis accelerometer and gyroscope of Inertial Measurement Unit (IMU) sensors. The architecture of the proposed system consists of two main modules: a wearable device (glove), which includes the sensors, a communication unit, and a microcontroller unit, and a software module running on a mobile device. The proposed glove utilizes six IMU sensors: five placed on the fingers and the sixth on the back of the palm. The data collected from the sensors are transmitted to a mobile device over a Bluetooth Low Energy wireless link for data analysis and gesture recognition. For each gesture, the start and end points of the meaningful gesture segment are detected automatically using the intensity of the gyroscope signal. The system employs feature-level sensor fusion that combines time-domain and frequency-domain features from the accelerometer and gyroscope. In a comprehensive set of experiments with seven participants, the system was trained and tested on the 28 letters of the Arabic alphabet, and the best recognition accuracy achieved was 98.6% on user-dependent data and 96% on user-independent data using a support vector machine classifier. These results outperform existing alphabet-level gesture recognition systems presented in recent decades.
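The processing pipeline described above (gyroscope-intensity segmentation, time- and frequency-domain feature extraction, and feature-level fusion) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold value, the interpretation of "intensity" as the per-sample gyroscope vector norm, the specific feature set (mean, standard deviation, RMS, dominant spectral magnitude, spectral energy), and all function names are assumptions for demonstration.

```python
import numpy as np

def segment_gesture(gyro, threshold=0.5):
    """Find the start/end indices of the active gesture segment.

    Assumes 'intensity' means the per-sample gyroscope vector norm
    (a hypothetical thresholding scheme; the paper does not give details).
    gyro: (n_samples, 3) array of angular rates.
    """
    magnitude = np.linalg.norm(gyro, axis=1)
    active = np.where(magnitude > threshold)[0]
    if active.size == 0:
        return None  # no motion detected
    return active[0], active[-1]

def extract_features(window):
    """Compute example time- and frequency-domain features per axis.

    Time domain: mean, standard deviation, RMS.
    Frequency domain: dominant non-DC spectral magnitude, spectral energy.
    (An illustrative feature set; the paper's exact features may differ.)
    window: (n_samples, 3) array for one sensor within the gesture segment.
    """
    feats = []
    for axis in window.T:
        spectrum = np.abs(np.fft.rfft(axis))
        feats += [
            axis.mean(),
            axis.std(),
            np.sqrt(np.mean(axis ** 2)),
            spectrum[1:].max() if spectrum.size > 1 else 0.0,
            np.sum(spectrum ** 2) / len(axis),
        ]
    return np.array(feats)

# Synthetic demo: 60 samples, motion only in samples 20..39.
rng = np.random.default_rng(0)
n = 60
gyro = np.zeros((n, 3))
gyro[20:40] = 1.0                       # strong rotation marks the gesture
accel = rng.normal(0.0, 0.1, (n, 3))    # low-amplitude accelerometer noise

seg = segment_gesture(gyro, threshold=0.5)
start, end = seg
window_gyro = gyro[start:end + 1]
window_accel = accel[start:end + 1]

# Feature-level fusion: concatenate accelerometer and gyroscope features
# (5 features x 3 axes x 2 sensors = 30-dimensional vector).
fused = np.concatenate([extract_features(window_accel),
                        extract_features(window_gyro)])
```

The fused vector would then be fed to a classifier such as an SVM, as the abstract describes; concatenation is the simplest form of feature-level fusion and is shown here only as one plausible choice.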