The particle filter (PF) is a method for estimating posterior densities using weighted samples whose elements are called particles. In particular, this approach can be applied to object tracking in video sequences under complex conditions and, in this paper, we focus on articulated object tracking, i.e., tracking objects that can be decomposed into a set of subparts. One of PF's crucial steps is resampling, in which particles are redrawn to avoid degeneracy problems. In this paper, we propose to exploit mathematical properties of articulated objects to swap conditionally independent subparts of the particles in order to generate new particle sets. We then introduce a new resampling method, called Combinatorial Resampling (CR), that resamples over the particle set resulting from all the "admissible" swappings, the so-called combinatorial set. In essence, CR is quite similar to combining a crossover operator with a usual resampling step, but there is a fundamental difference between CR and the use of crossover operators: we prove that CR is sound, i.e., in a Bayesian framework, it is guaranteed to represent the posterior densities of the states over time without any bias. By construction, the particle sets produced by CR represent the density to estimate over the whole state space better than the original set and, therefore, CR produces higher-quality samples. Unfortunately, the combinatorial set is generally of exponential size and, therefore, to remain scalable, we show how it can be implicitly constructed and resampled from, resulting in a resampling scheme that is both efficient and effective. Finally, through experiments on challenging synthetic and real video sequences, we show that our resampling method outperforms classical resampling methods both in the quality of its results and in computation time.
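
To make the underlying intuition more concrete, here is a minimal, hypothetical Python sketch of the two ingredients the abstract refers to: a classical multinomial resampling step and a naive recombination of conditionally independent subparts across particles. This is not the paper's Combinatorial Resampling, which resamples directly and implicitly from the exponentially large combinatorial set; the state layout, the split point between subparts, and the independence assumption used below are illustrative assumptions, not taken from the paper.

```python
# Toy sketch (not the paper's algorithm): a classical multinomial resampling
# step followed by a naive "subpart swap" among the resampled particles.
# The state layout and the conditional-independence assumption between the
# two subpart blocks are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def multinomial_resample(particles, weights):
    """Classical resampling: draw N particle indices proportionally to the weights."""
    n = len(weights)
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx]

def swap_subparts(particles, split):
    """Naive stand-in for exploring the combinatorial set: recombine the two
    subpart blocks of the state vector across (equally weighted) particles by
    permuting the second block.  This is only legitimate when the two blocks
    are conditionally independent."""
    perm = rng.permutation(len(particles))
    recombined = particles.copy()
    recombined[:, split:] = particles[perm, split:]
    return recombined

# Example: N particles, state = [subpart A (2 dims) | subpart B (2 dims)].
N, split = 1000, 2
particles = rng.normal(size=(N, 4))
weights = rng.random(N)
weights /= weights.sum()

resampled = multinomial_resample(particles, weights)
augmented = swap_subparts(resampled, split)  # more distinct subpart combinations than in `resampled`
```

In this toy version the swap is applied after a standard resampling pass, which merely enlarges the variety of subpart combinations among equally weighted particles; the paper's contribution is to perform the resampling itself over the full set of admissible swappings without ever enumerating it.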