Abstract
The Probability Hypothesis Density (PHD) filter is a computationally attractive technique for the multiple-target tracking problem. However, its cost becomes prohibitive when the clutter intensity and sampling rate are high, so the execution time of the sequential particle PHD filter cannot meet real-time processing requirements. To address this problem, we propose a parallel scheme for efficiently implementing the particle PHD filter on clusters with a multicore distributed-memory architecture. Since particles can be treated independently and spread among processors, the prediction and update steps are readily performed in parallel. The resampling and estimation steps, however, require joint processing of all particles and therefore become the bottleneck that limits the speedup and scalability of a parallel implementation. We propose an approach that fulfills parallel resampling and stratified estimation in a unified architecture, and we also discuss particle exchange for rebalancing the workload among computing nodes. Experimental results show that the tracking performance of the parallel version is almost equivalent to, or even better than, that of the sequential one, while achieving a tremendous speedup in execution time.
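The parallelization structure described above can be illustrated with a minimal sketch (not the paper's implementation): particles are partitioned across workers so prediction and update run independently per partition, while resampling needs every weight and is therefore the joint-processing bottleneck. The scalar motion model, Gaussian likelihood, and clutter intensity below are simplifying assumptions for illustration only.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def predict_update(partition, z, clutter_intensity=0.1):
    """Propagate each (state, weight) particle and reweight it against
    measurement z. This step touches only the local partition, so it is
    embarrassingly parallel."""
    out = []
    for x, w in partition:
        x_new = x + random.gauss(0.0, 0.1)            # assumed motion model
        lik = math.exp(-0.5 * (z - x_new) ** 2)       # assumed Gaussian likelihood
        out.append((x_new, w * lik / (clutter_intensity + lik)))
    return out

def parallel_phd_step(particles, z, n_workers=4):
    # Spread particles among workers: prediction/update in parallel.
    chunks = [particles[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(lambda c: predict_update(c, z), chunks))
    merged = [p for chunk in results for p in chunk]
    # Resampling is the bottleneck: it requires the global weight sum,
    # i.e. joint processing of all particles.
    total = sum(w for _, w in merged)
    n = len(merged)
    states = random.choices([x for x, _ in merged],
                            weights=[w for _, w in merged], k=n)
    # In the particle PHD filter the total weight estimates the expected
    # target count, so it is preserved across resampling.
    return [(x, total / n) for x in states]

particles = [(random.gauss(0.0, 1.0), 1.0 / 100) for _ in range(100)]
particles = parallel_phd_step(particles, z=0.5)
```

The `parallel_phd_step` name and the thread-pool partitioning are illustrative stand-ins for the cluster-level distribution the abstract describes; a distributed-memory version would additionally exchange particles between nodes to rebalance load.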