Machine teaching is an inverse problem of machine learning that aims to steer a student toward a target hypothesis, where the teacher already knows the student's learning parameters. Previous studies on machine teaching focused on balancing teaching risk and teaching cost to find the best teaching examples derived from the student model. This optimization approach is generally ineffective when the student does not disclose any cue about its learning parameters. To handle such a teaching scenario, this article presents a distribution matching-based machine teaching strategy that iteratively shrinks the teaching cost under a smooth surrogate, eliminating boundary perturbations from the version space. Technically, our strategy can be formulated as a cost-controlled optimization process that finds the optimal teaching examples without further probing the parameter distribution of the student. Given any limited teaching cost, the teaching examples then admit a closed-form expression. Theoretical analysis and experimental results demonstrate the effectiveness of this strategy.
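As a toy illustration only (the abstract does not specify the surrogate, so this is not the paper's actual algorithm), the sketch below shows one way a distribution matching-based, cost-controlled selection of teaching examples could look: it greedily picks examples whose empirical distribution matches the candidate pool, measured by a stand-in criterion (maximum mean discrepancy with an RBF kernel), and stops once the matching gain no longer offsets a per-example teaching cost. All names (`greedy_teaching_set`, `lam`, `gamma`) and the MMD choice are hypothetical assumptions; note that no student parameters are consulted anywhere.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between sample sets X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

def greedy_teaching_set(pool, budget, lam=1e-3, gamma=1.0):
    """Greedily pick teaching examples that match the pool's distribution.

    Each step adds the candidate that most reduces the MMD to the pool;
    the loop stops early once the improvement no longer offsets the
    per-example teaching cost `lam` (a hypothetical cost penalty), so the
    cost term shrinks the teaching set.
    """
    chosen = []
    for _ in range(budget):
        current = mmd2(pool[chosen], pool, gamma) if chosen else np.inf
        best_i, best_val = None, current
        for i in range(len(pool)):
            if i in chosen:
                continue
            val = mmd2(pool[chosen + [i]], pool, gamma)
            if val < best_val:
                best_i, best_val = i, val
        # Stop when the marginal distribution-matching gain is below the cost.
        if best_i is None or current - best_val < lam:
            break
        chosen.append(best_i)
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool = rng.normal(size=(200, 2))   # synthetic candidate examples
    idx = greedy_teaching_set(pool, budget=20)
    print(f"selected {len(idx)} teaching examples:", idx)
```

The greedy loop here is a generic stand-in for the cost-controlled optimization the abstract describes; in the paper's setting, a limited teaching cost would instead yield the teaching examples in closed form.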