This paper addresses the problem of estimating the number of mixture components of a Gaussian mixture model (GMM) from a given data sequence. Our approach is to compute the normalized maximum likelihood (NML) code length for the data sequence relative to a GMM, and then to find the mixture size that minimizes the NML code length, on the basis of the minimum description length (MDL) principle. For finite domains, Kontkanen and Myllymäki proposed a method for efficiently computing the NML code length for specific models; for general model classes over infinite domains, however, efficient computation of the NML code length has remained an open problem. We first propose a general method for computing the NML code length for a general exponential family. We then apply it to the efficient computation of the NML code length for a GMM. The key idea is to restrict the data domain in combination with the technique of employing a generating function to compute the normalization term for a GMM. Using artificial datasets, we empirically demonstrate that our estimate of the mixture size converges to the true one significantly faster than estimates based on other criteria.
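To make the selection procedure concrete, the following is a minimal sketch of MDL-style mixture-size selection for a 1-D GMM: fit the model by EM for each candidate size k, then pick the k minimizing a two-part code length. The complexity term below is a BIC-style stand-in, (number of free parameters)/2 times log n, not the exact NML normalizer computed in the paper; the EM routine, initialization, and dataset are likewise illustrative assumptions.

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=300, var_floor=1e-3):
    """Fit a 1-D Gaussian mixture with k components by EM; return max log-likelihood."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # Deterministic initialization: means at evenly spaced quantiles of the data.
    means = np.quantile(x, (np.arange(k) + 0.5) / k)
    variances = np.full(k, x.var())
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: per-point, per-component log joint density, then responsibilities.
        log_dens = (-0.5 * np.log(2 * np.pi * variances)
                    - 0.5 * (x[:, None] - means) ** 2 / variances
                    + np.log(weights))
        log_norm = np.logaddexp.reduce(log_dens, axis=1)
        resp = np.exp(log_dens - log_norm[:, None])
        # M-step: weighted parameter updates (variance floored for stability).
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp * x[:, None]).sum(axis=0) / nk
        variances = np.maximum(
            (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk, var_floor)
    return log_norm.sum()

# Synthetic data from a well-separated 2-component mixture (illustrative only).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(10.0, 1.0, 200)])

codelen = {}
for k in range(1, 4):
    loglik = em_gmm_1d(x, k)
    # Stand-in parametric complexity: a 1-D GMM with k components has 3k - 1
    # free parameters (k means, k variances, k - 1 weights).
    comp = 0.5 * (3 * k - 1) * np.log(x.size)
    codelen[k] = -loglik + comp

best_k = min(codelen, key=codelen.get)
print(best_k)
```

The paper's contribution is to replace the stand-in complexity term with the exact NML normalization term, made computable for a GMM via domain restriction and a generating-function technique.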