A new framework, termed three-subunit sign modeling, is introduced for automatic sign language recognition. It operates on continuous video sequences comprising isolated words and signed sentences under varying signer and illumination conditions. Three major issues in automatic sign language recognition are addressed: (i) extraction and selection of discriminative features; (ii) handling of epenthesis movements and segmentation ambiguities; and (iii) recognition of large-vocabulary sign sentences and signer adaptation within a single subunit sign modeling framework. The proposed approach has been evaluated both subjectively and quantitatively on real-time signing videos gathered from multiple corpora and sign languages. The experimental results show that the proposed subunit sign modeling framework remains scalable as the sign vocabulary grows, and that it is reliable and efficient enough to meet real-time constraints and achieve signer independence.