Abstract

We study a weighted online bipartite matching problem: G(V1, V2, E) is a weighted bipartite graph where V1 is known beforehand and the vertices of V2 arrive online. The goal is to match vertices of V2 as they arrive to vertices in V1, so as to maximize the sum of weights of edges in the matching. If assignments to V1 cannot be changed, no bounded competitive ratio is achievable. We study the weighted online matching problem with free disposal, where vertices in V1 can be assigned multiple times, but only get credit for the maximum weight edge assigned to them over the course of the algorithm. For this problem, the greedy algorithm is 0.5-competitive, and determining whether a better competitive ratio is achievable is a well-known open problem.

We identify an interesting special case where the edge weights are decomposable as the product of two factors, one corresponding to each endpoint of the edge. This is analogous to the well-studied related machines model in the scheduling literature, although the objective functions are different. For this case of decomposable edge weights, we design a 0.5664-competitive randomized algorithm for complete bipartite graphs. We show that such instances with decomposable weights are non-trivial by establishing upper bounds of 0.618 for deterministic and 0.8 for randomized algorithms.

A tight competitive ratio of 1 − 1/e ≈ 0.632 was known previously for both the 0-1 case and the case where edge weights depend on the offline vertices only, but in these cases reassignments cannot change the quality of the solution. Beating 0.5 for weighted matching where reassignments are necessary has been a significant challenge. We thus give the first online algorithm with a competitive ratio strictly better than 0.5 for a non-trivial case of weighted matching with free disposal.
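To make the free-disposal model concrete, the following is a minimal sketch (not the paper's 0.5664-competitive algorithm) of the greedy baseline mentioned above: each arriving online vertex is assigned to the offline vertex whose credited value increases the most, and an offline vertex is only credited for the maximum-weight edge ever assigned to it. The function name, data layout, and the small decomposable-weight example are illustrative assumptions, not from the paper.

```python
def greedy_free_disposal(offline_vertices, arrivals):
    """offline_vertices: iterable of ids in V1.
    arrivals: one dict per online vertex of V2, mapping each offline id to the
              offered edge weight (complete bipartite case).
    Returns the total credited weight: sum over V1 of the largest edge weight
    ever assigned to each offline vertex (free disposal)."""
    best = {u: 0.0 for u in offline_vertices}  # current credited weight per offline vertex
    for weights in arrivals:
        # marginal gain of assigning this arrival to u under free disposal
        gains = {u: max(0.0, weights.get(u, 0.0) - best[u]) for u in best}
        u_star = max(gains, key=gains.get)
        if gains[u_star] > 0:
            best[u_star] = weights[u_star]  # the previous assignment is disposed of for free
    return sum(best.values())

# Illustrative decomposable weights: w(u, v) = a[u] * b[v]
a = {"u1": 2.0, "u2": 1.0}
arrivals = [{u: a[u] * b for u in a} for b in (1.0, 3.0, 0.5)]
print(greedy_free_disposal(a.keys(), arrivals))
```

In this toy run greedy credits 6.5, while an optimal offline assignment credits 7.0; the abstract's claim is that such losses can be bounded strictly below a factor of 2 for decomposable weights.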
