Abstract

The Max-Product belief propagation algorithm is widely used to process the constraints associated with optimization problems in a broad range of application domains, such as information theory, multi-agent systems, and image processing. Constraint optimization is typically accomplished by performing inference through a message-passing process. During this process, the Max-Product algorithm performs a repetitive maximization operation, which is considered one of the main reasons the algorithm can be computationally expensive. In particular, scalability becomes a challenge when Max-Product has to deal with constraint functions of high arity and/or variables with large domains. In either case, the resulting exponential growth of the search space can make the algorithm's maximization operator computationally infeasible in practice. Indeed, it is frequently observed that an algorithm's output becomes obsolete or unusable because the optimization process takes too long, and this issue grows more severe and prevalent as the size of the problem increases, constraining the practical scalability of such algorithms. However, it is challenging to reduce the computational cost of the algorithm while maintaining solution quality. Success in doing so would reduce the algorithm's overall completion time without compromising the quality of its solution. To address this issue, we develop a generic pruning technique that enables the maximization operator of the Max-Product algorithm to operate on a significantly reduced search space; empirically, the reduction is around 85% or more. Additionally, we demonstrate theoretically that the pruned search space obtained through our approach has no negative impact on the algorithm's outcome.
Finally, further empirical evidence notably suggests that our proposed approach reduces the time required to complete a single maximization operation by 50% to around 99%. DUJASE Vol. 7(1) 45-57, 2022 (January)
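To make the cost structure concrete, the following sketch shows the maximization operation the abstract refers to: a max-product message from a factor to one of its variables, which enumerates the joint domain of all other variables attached to the factor. It also includes an illustrative, threshold-based domain-pruning rule. Note that this rule, the `ratio` parameter, and all function names are our own assumptions for illustration only; the paper's actual pruning criterion (which provably preserves the outcome) is not reproduced here.

```python
import itertools


def max_product_message(factor, domains, incoming, target):
    """Max-product message from a factor to `target`.

    factor   : dict mapping joint assignments (tuples, variables in
               sorted order) to non-negative factor values
    domains  : dict var -> list of candidate values
    incoming : dict var -> {value: incoming variable-to-factor message}
    target   : the variable the message is sent to

    The inner loop enumerates the full cross-product of the other
    variables' domains, which is exactly the exponential search space
    that makes the operator expensive for high-arity factors.
    """
    others = [v for v in domains if v != target]
    msg = {}
    for t_val in domains[target]:
        best = 0.0
        for assignment in itertools.product(*(domains[v] for v in others)):
            joint = dict(zip(others, assignment))
            joint[target] = t_val
            key = tuple(joint[v] for v in sorted(domains))
            val = factor.get(key, 0.0)
            for v in others:
                val *= incoming[v][joint[v]]
            best = max(best, val)
        msg[t_val] = best
    return msg


def pruned_domains(domains, incoming, target, ratio=0.1):
    """Illustrative pruning rule (NOT the paper's criterion): drop a
    neighbour's value if its incoming message falls below `ratio`
    times that neighbour's best incoming message, shrinking the
    cross-product the maximization must search."""
    pruned = {}
    for v, dom in domains.items():
        if v == target:
            pruned[v] = list(dom)
        else:
            peak = max(incoming[v][x] for x in dom)
            pruned[v] = [x for x in dom if incoming[v][x] >= ratio * peak]
    return pruned
```

Shrinking each neighbour's domain shrinks the enumerated search space multiplicatively, which is why even modest per-variable pruning yields the large reductions reported above. Unlike the simple ratio rule sketched here, the technique in the paper is shown to leave the algorithm's outcome unchanged.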
