Abstract

Learning-based (LB) matting is an effective matting algorithm, but its usability is greatly limited by its heavy computational cost. In this paper, we offer new insights into this algorithm and make the following contributions. First, we explain the LB algorithm from a manifold-learning perspective and unify it within the standard two-stage matting framework. Second, based on the characteristics of the two stages, we propose an acceleration scheme that exploits both CPU and GPU parallelism to speed up LB matting by up to 15X. Third, we propose an image partition method that improves load balance and precision for CPU-based block-level parallelism. Finally, we analyze the performance of the sparse linear solvers used in general matting problems and provide an optimized default choice for solver selection. Experiments on recent parallel frameworks and mathematical libraries show that our scheme performs well in terms of both acceleration and precision.
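
The sketch below illustrates the second stage referred to in the abstract, namely solving the sparse linear system that produces the alpha matte, and the choice between an iterative and a direct sparse solver that the solver-selection analysis concerns. It assumes the usual affinity-based formulation (L + λC)α = λCg, where L is the matting Laplacian from the first stage, C marks trimap-constrained pixels, and g holds their alpha values; the function name, the λ value, and the trimap encoding are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the solving stage of two-stage matting (assumed formulation,
# not the paper's code). Names and parameter values are illustrative.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_alpha(L, trimap, lam=100.0):
    """Solve (L + lam*C) alpha = lam*C*g for the alpha matte.

    L      : (N, N) sparse matting Laplacian (output of the first stage)
    trimap : length-N array with 1 for known foreground, 0 for known background,
             and NaN for unknown pixels (illustrative encoding)
    lam    : weight enforcing the trimap constraints
    """
    known = ~np.isnan(trimap)                     # constrained pixels
    C = sp.diags(known.astype(np.float64))        # diagonal constraint matrix
    g = np.where(known, np.nan_to_num(trimap), 0.0)

    A = (L + lam * C).tocsr()                     # sparse symmetric system matrix
    b = lam * (C @ g)

    # Iterative solver (conjugate gradients) is one common choice for this
    # symmetric system; a direct sparse factorization is the alternative
    # trade-off discussed by solver-selection analyses.
    alpha, info = spla.cg(A, b, maxiter=2000)
    if info != 0:
        alpha = spla.spsolve(A, b)                # fall back to a direct solve
    return np.clip(alpha, 0.0, 1.0)
```

In this sketch the affinity-construction stage (building L) maps naturally onto block-level CPU parallelism, while the dense per-pixel arithmetic inside the solver is the part typically offloaded to the GPU.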
