People find it challenging to control smart systems with complex gaze gestures because eye saccades are noisy and unstable. Existing work achieves good recognition accuracy for simple gaze gestures, which provide sufficient eye gaze points, but simple gestures support far fewer applications than complex ones. Complex gaze gestures are composed of multiple eye-fixation subunits, yielding a sequence of gaze points that are clustered and rotated according to an underlying head orientation relationship. This paper proposes a new sequence representation that combines eye gaze points with head orientation angles to recognize complex gaze gestures, since both features strongly influence gaze gesture formation. The new sequence was obtained by aligning the clustered gaze points and head orientation angles with a simple moving average (SMA), which denoises the data and interpolates the gaps between successive eye fixations. The aligned sequences of complex gaze gestures were then used to train sequential machine learning (ML) algorithms. To evaluate the performance of the proposed method, we recruited ten participants and recorded their eye gaze and head orientation features using an eye tracker. The results show that a Boosted Hidden Markov Model (HMM) using the Random Subspace method achieved the best accuracies of 94.72% for complex gestures and 98.1% for simple gestures, outperforming conventional methods.
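
To make the alignment step concrete, the sketch below shows one plausible way to build such a sequence: gaze points and head orientation angles are resampled onto a common time grid, linear interpolation fills the gaps between successive fixations, and an SMA denoises the result. The function names, five-sample window, and 10 ms grid are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): aligning gaze points with head
# orientation angles and denoising with a simple moving average (SMA).
# Window size, time step, and variable names are assumptions for illustration.
import numpy as np

def sma(x, window=5):
    """Simple moving average along the time axis, preserving sequence length."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, x)

def align_gaze_and_head(gaze_t, gaze_xy, head_t, head_angles, step=0.01):
    """Resample both streams onto a shared time grid, interpolate the gaps
    between fixations, and return the SMA-denoised feature sequence."""
    t = np.arange(max(gaze_t[0], head_t[0]),
                  min(gaze_t[-1], head_t[-1]), step)
    gaze = np.column_stack([np.interp(t, gaze_t, gaze_xy[:, i]) for i in range(2)])
    head = np.column_stack([np.interp(t, head_t, head_angles[:, i]) for i in range(3)])
    features = np.hstack([gaze, head])   # (x, y, yaw, pitch, roll) per time step
    return sma(features, window=5)        # aligned, denoised sequence for the ML model
```

A sequence produced this way could then be fed to a sequential classifier such as an HMM, with one aligned feature vector per time step.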