Abstract

The most challenging aspect of implementing particle filters in hardware is the resampling step. Its latency is high because it can only partially be executed in parallel with the other steps of particle filtering and has no inherent parallelism of its own. To reduce this latency, an improved resampling architecture is proposed in which values are pre-fetched from the weight memory in parallel with fetching a value from a random function generator, together with architectures for realizing this pre-fetch technique. This enables a particle filter using M particles with otherwise streaming operation to accept new inputs more often than every 2M cycles, the rate of the previously best approach. Results show that a pre-fetch buffer of five values achieves the best area-latency trade-off, reducing the resampling latency by 85% on average and the sample time by more than 40%. We also propose a generic division-free architecture for the resampling steps, which also removes the need to explicitly order the random values for an efficient multinomial resampling implementation. In addition, on-the-fly computation of the cumulative sum of weights is proposed, which helps reduce the word length of the particle weight memory. FPGA implementation results show that the memory size is reduced by up to 50%.
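For context, the sketch below is a plain software version of multinomial resampling driven by the cumulative sum of the unnormalized weights. It only illustrates the algorithmic step the paper accelerates; it is not the proposed pre-fetch or division-free hardware architecture, and the example data are arbitrary.

```python
import numpy as np

def multinomial_resample(particles, weights, rng):
    """Plain multinomial resampling over M weighted particles.

    The cumulative sum of the unnormalized weights is compared against
    uniform draws on [0, W), where W is the total weight, so the weights
    never have to be normalized by a division before the comparison.
    """
    M = len(particles)
    cum_weights = np.cumsum(weights)         # running sum w_1, w_1+w_2, ...
    total = cum_weights[-1]                  # W = sum of all weights
    u = rng.uniform(0.0, total, size=M)      # M independent uniform draws
    idx = np.searchsorted(cum_weights, u)    # first k with cum_weights[k] >= u
    return particles[idx], np.full(M, 1.0 / M)

# Example: 8 particles whose weights favour the larger values
rng = np.random.default_rng(0)
particles = np.arange(8, dtype=float)
weights = particles + 1.0
new_particles, new_weights = multinomial_resample(particles, weights, rng)
print(new_particles)
```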

Highlights

  • In problems that involve hidden Markov models (HMMs) with unobserved states and cannot be evaluated analytically, filtering refers to the estimation of that state from a set of observations corrupted by noise

  • Particle filters track x(n) by recursively updating a random measure {x_k(n), w_k(n)}, k = 1, …, M, which consists of M particles x_k(n) and their weights w_k(n) defined at time n

  • This random measure is used to approximate the posterior density of the unknown vector [3]


Summary

Introduction

In problems that involve hidden Markov models (HMMs) with unobserved states and cannot be evaluated analytically, filtering refers to the estimation of that state from a set of observations corrupted by noise. In other words, it refers to determining the state probability distribution at time instance n, given observations up to n. The states evolve, and capturing this evolution enables prediction of the state in the future [3, 9, 11, 12, 13, 14, 27]. It can be described by a set of equations, of which the state equation is

x(n) = f(x(n − 1), z(n − 1))
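As a concrete illustration of the state model and the random measure mentioned above, the sketch below propagates a set of particles through a placeholder transition f and re-weights them with a placeholder Gaussian observation likelihood. The specific f, observation model, and noise levels are illustrative assumptions, not the models or the hardware architecture from the paper.

```python
import numpy as np

def transition(x_prev, rng):
    """Placeholder state transition f: a random walk driven by noise z."""
    z = rng.normal(0.0, 0.1, size=x_prev.shape)
    return x_prev + z

def likelihood(y, x):
    """Placeholder observation likelihood p(y | x): Gaussian with unit variance."""
    return np.exp(-0.5 * (y - x) ** 2)

def particle_filter_step(particles, weights, y, rng):
    """One update of the random measure {x_k(n), w_k(n)}, k = 1..M."""
    particles = transition(particles, rng)          # draw x_k(n) from f(x_k(n-1), z(n-1))
    weights = weights * likelihood(y, particles)    # re-weight by the new observation
    weights = weights / weights.sum()               # normalize (the division the paper's hardware avoids)
    estimate = float(np.sum(weights * particles))   # posterior-mean estimate of x(n)
    return particles, weights, estimate

rng = np.random.default_rng(0)
M = 1000
particles = rng.normal(0.0, 1.0, size=M)            # initial particle set x_k(0)
weights = np.full(M, 1.0 / M)
particles, weights, estimate = particle_filter_step(particles, weights, y=0.3, rng=rng)
print(estimate)
```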

Architectures for Particle Filters
Resampling in Particle Filters
Proposed Techniques
Reduction in Resampling Latency – Pre-Fetch
Hardware Architecture for Pre-Fetch Resampling
Latency Reduction
Conclusion