Multinomial logistic regression (MLR) is a useful tool for solving multi-class classification problems. The lq,0 (q ⩾ 1) norm is an ideal regularization term for characterizing group sparsity in multinomial logistic regression and for selecting important features in high-dimensional data. However, lq,0-regularized multinomial logistic regression (lq,0-MLR) is nonconvex, discontinuous, and NP-hard. Thus, most prior studies have adopted continuous approximations of the lq,0 norm. In this paper, we present a novel lq,0-proximal Newton algorithm (lq,0-PNA) to solve lq,0-MLR directly. We first define a strong α-stationary point and prove that such a point is a local minimizer of lq,0-MLR. We then characterize such a point by a stationary equation and solve this equation with lq,0-PNA, a Newton-type method that operates on a group-sparse subspace and therefore has low computational cost. Furthermore, we establish the local quadratic convergence of lq,0-PNA. Finally, numerical experiments on simulated and real data show that lq,0-PNA outperforms six state-of-the-art solvers in computational time and accuracy, especially on high-dimensional data.
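To make the regularizer concrete, the following is a minimal sketch (not the paper's algorithm) of the group hard-thresholding operator for the q = 2 case: the proximal operator of λ‖·‖_{2,0} keeps a group z_g exactly when ‖z_g‖_2 > sqrt(2λ) and zeroes it otherwise, which is what induces group sparsity in lq,0-regularized models. The function name and group encoding are illustrative assumptions.

```python
import numpy as np

def group_hard_threshold(z, groups, lam):
    """Proximal operator of lam * ||.||_{2,0} (illustrative sketch).

    For each group of indices g, solves
        min_x 0.5 * ||x - z||_2^2 + lam * 1[x_g != 0]
    whose closed form keeps z_g when ||z_g||_2 > sqrt(2 * lam)
    and sets the group to zero otherwise.
    """
    x = np.zeros_like(z)
    threshold = np.sqrt(2.0 * lam)
    for idx in groups:
        zg = z[idx]
        if np.linalg.norm(zg) > threshold:  # group survives thresholding
            x[idx] = zg
    return x

# Example: with lam = 0.5 the threshold is 1, so the first group
# (norm 3) is kept and the second group (norm ~0.14) is zeroed out.
z = np.array([3.0, 0.1, 0.1])
x = group_hard_threshold(z, [[0], [1, 2]], lam=0.5)
```

A proximal Newton method for this class of problems alternates a Newton-type step on the support identified by such an operator with the thresholding itself, which is why the computation can stay on a small group-sparse subspace.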