Stochastic optimization algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), estimation of distribution algorithms (EDAs), and the nested partitions algorithm (NPA) are used in many problems, including nonlinear model predictive control and task assignment. Some of these algorithms, however, either lack a global convergence guarantee (e.g., PSO) or require strict assumptions for convergence (e.g., NPA). To improve the convergence of these methods, a common underlying framework is established that represents these seemingly unrelated methods as the iterative sampling and updating of a population distribution; methods that fit this framework are called population distribution-based methods. Global convergence conditions for the framework are developed by constructing a shadow NPA structure for the population evolution process. The result is generic and applies to the convergence analysis of many methods, including GA, PSO, EDAs, and NPA, and it can be further exploited to improve convergence by modifying these methods. The existing and modified variants of these methods are then applied to case studies to demonstrate the improvement.
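As a concrete illustration of the core loop shared by population distribution-based methods (sample a population from the current distribution, then refit the distribution from the best samples), the following is a minimal Gaussian EDA sketch. It is not the paper's algorithm; the objective function, population size, elite fraction, and iteration count are all illustrative assumptions.

```python
import random
import statistics

def sphere(x):
    # Illustrative test objective: global minimum of 0 at x = 0.
    return sum(xi * xi for xi in x)

def gaussian_eda(obj, dim=2, pop_size=50, elite_frac=0.2, iters=60, seed=0):
    """Minimal Gaussian EDA: iteratively sample a population from the
    current distribution, select elite individuals, and refit the
    distribution to them. All parameter values are illustrative."""
    rng = random.Random(seed)
    # Initialize the population distribution (independent Gaussians).
    mu = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    sigma = [5.0] * dim
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(iters):
        # Sample a population from the current distribution.
        pop = [[rng.gauss(mu[d], sigma[d]) for d in range(dim)]
               for _ in range(pop_size)]
        # Select the elite (lowest-objective) individuals.
        pop.sort(key=obj)
        elites = pop[:n_elite]
        # Update the population distribution from the elite samples.
        for d in range(dim):
            vals = [ind[d] for ind in elites]
            mu[d] = statistics.fmean(vals)
            sigma[d] = max(statistics.pstdev(vals), 1e-12)
    return mu

best = gaussian_eda(sphere)
print(best)
```

With a fixed seed the distribution mean contracts toward the optimum of the sphere objective, making the "update the population distribution through iterative sampling" view of the framework concrete.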