This paper proposes the blind separation of convolutive post-nonlinear (CPNL) mixtures based on the minimization of a penalized mutual information criterion. The proposed algorithm relies on the estimation of the score function difference (SFD) and on Newton optimization. Compared with the blind source separation of linear mixtures, the separation performance for nonlinear mixtures depends strongly on the accuracy of the score function estimation. Within this framework, a multivariate Edgeworth-expanded Gaussian mixture density is adopted to estimate the SFD, since it preserves the higher-order statistical structure of the data better than nonparametric density estimation. In addition, Newton optimization converges faster than steepest-descent gradient methods. To obtain the Hessian matrix, the Taylor expansion of the penalized mutual information criterion is carried out to second order. Minimizing the penalized criterion enforces an a priori normalization of the estimated sources, thereby avoiding scale indeterminacy. As a result, the proposed algorithm achieves better separation performance while converging faster. Simulations on computer-generated data and synthesized real-world data demonstrate the effectiveness of the proposed algorithm.
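As a rough illustrative sketch of the optimization framework described above (the notation here, including the criterion $J$, the separator parameter vector $\boldsymbol{\theta}$, the penalty weight $\lambda$, and the specific quadratic form of the penalty, is assumed for illustration and is not taken verbatim from the paper), a penalized mutual information criterion and its Newton update could be written as:
\[
J(\boldsymbol{\theta}) \;=\; I\big(y_1,\dots,y_N\big) \;+\; \lambda \sum_{i=1}^{N}\Big(\mathbb{E}\big[y_i^2\big]-1\Big)^{2},
\]
where $I(\cdot)$ denotes the mutual information of the separated outputs and the penalty term enforces unit-variance (a priori normalized) sources. A second-order Taylor expansion of $J$ around the current parameters $\boldsymbol{\theta}_k$,
\[
J(\boldsymbol{\theta}_k+\Delta\boldsymbol{\theta}) \;\approx\; J(\boldsymbol{\theta}_k) + \nabla J(\boldsymbol{\theta}_k)^{\!\top}\Delta\boldsymbol{\theta} + \tfrac{1}{2}\,\Delta\boldsymbol{\theta}^{\!\top}\mathbf{H}_k\,\Delta\boldsymbol{\theta},
\]
then yields the Newton update
\[
\boldsymbol{\theta}_{k+1} \;=\; \boldsymbol{\theta}_k \;-\; \mathbf{H}_k^{-1}\,\nabla J(\boldsymbol{\theta}_k),
\]
in which the gradient $\nabla J$ is expressed through the estimated SFD and $\mathbf{H}_k$ is the Hessian obtained from the second-order expansion.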