A robust, intuitive, and novel dynamic hand-gesture-based virtual keyboard system is developed in this study. First, a new hierarchical approach is applied which, based on self-coarticulation and position features, effectively sub-groups a large gesture vocabulary. In addition, new trajectory features are proposed that extract the local structural statistics of the gestures. Existing state-of-the-art models rely on temporal trajectory features derived from the frame-wise 2D sequential path of the gesture; such features are therefore path dependent and vulnerable to trajectory noise and to variations in pattern, speed, or scale. In contrast, this study proposes an image-based gesture-recognition approach that is independent of the sequential gesturing path. Since the image models (a holistic view) are not obtained frame-wise, unlike existing image models, they are pattern, speed, and scale invariant and are also immune to trajectory distortions. To this end, image-based features and significant trajectory features are fused to develop a hybrid hierarchical classification model, which improves accuracy by 3.9% over the baseline non-hierarchical trajectory-based model using an artificial neural network (ANN). Classification models such as the Voronoi-diagram-based classifier (VDBC) and the neuro-fuzzy (NF) classifier have also been explored and showed promising performance. A reduction in misclassification has been observed for easily confused gesture pairs such as ‘(’ and ‘)’, ‘{’ and ‘}’, ‘0’ and ‘O’, and ‘Z’ and ‘2’. The present system can also identify any static or dynamic imposters present in the gesture environment.
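The two key ideas above, a path-independent image-based (holistic) feature and a two-stage hierarchical classifier, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual pipeline: the occupancy-grid feature, the start-point routing rule, and the nearest-centroid matcher are all simplifying assumptions standing in for the proposed image features, the self-coarticulation/position sub-grouping, and the ANN/VDBC/NF classifiers.

```python
import numpy as np

def extract_image_features(points, grid=8):
    """Rasterize a gesture trajectory into a coarse binary occupancy grid.
    The grid is built from the point *set*, not the frame-wise sequence,
    so the feature is independent of drawing order and speed and, after
    normalization, of scale (a hypothetical stand-in for the paper's
    image-based holistic features)."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.min(axis=0)          # translate to origin
    span = pts.max(axis=0)
    span[span == 0] = 1.0                # guard degenerate axes
    pts = pts / span                     # scale-invariant normalization
    idx = np.minimum((pts * grid).astype(int), grid - 1)
    img = np.zeros((grid, grid))
    img[idx[:, 1], idx[:, 0]] = 1.0      # mark visited cells
    return img.ravel()

def hybrid_hierarchical_classify(points, subgroup_of, centroids):
    """Stage 1: route the gesture to a sub-group via a crude position
    feature (here, its raw start point).  Stage 2: nearest-centroid
    match on the holistic features within that sub-group only."""
    feat = extract_image_features(points)
    group = subgroup_of(np.asarray(points[0], dtype=float))
    best_label, best_dist = None, np.inf
    for label, centroid in centroids[group].items():
        dist = np.linalg.norm(feat - centroid)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

Because the feature ignores point order, a gesture drawn backwards, faster, or at a different size maps to the same grid, which illustrates why such holistic features resist the trajectory distortions that affect frame-wise path features.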