Abstract

In multi-hop radio networks, such as wireless ad-hoc networks and wireless sensor networks, nodes employ a MAC (Medium Access Control) protocol, such as TDMA, to coordinate access to the shared medium and to avoid interference between nearby transmissions. These protocols can be implemented using standard node coloring. The (Δ+1)-coloring problem is to color all nodes in as few timeslots as possible using at most Δ+1 colors, such that any two nodes within distance R are assigned different colors, where R is a given parameter and Δ is the maximum degree of the unit disk graph induced by R as the scaling factor. As one of the most fundamental problems in distributed computing, this problem is well studied, and a long line of algorithms has been proposed for it. However, all previous work is based on abstract models, such as message-passing models and graph-based interference models, which limits the utility of these algorithms in practice. In this paper, for the first time, we consider the distributed (Δ+1)-coloring problem under the more practical SINR interference model. In particular, without requiring any knowledge about the neighborhood, we propose a novel randomized (Δ+1)-coloring algorithm with time complexity O(Δ log n + log² n). For the case where nodes cannot adjust their transmission power, we give an O(Δ log² n) randomized algorithm, which incurs only a logarithmic multiplicative overhead.
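To make the coloring constraint concrete, the sketch below simulates the classic randomized color-trial loop that distributed (Δ+1)-coloring algorithms are typically built on: each uncolored node proposes a random color not used by its already-colored neighbors and keeps it only if no contending neighbor proposed the same color. This is a minimal illustration of the general technique under a simple adjacency-list graph model, not the paper's SINR-aware algorithm; the function name and data layout are assumptions made for the example.

```python
import random

def randomized_coloring(nodes, neighbors, delta):
    """Classic randomized (Delta+1) color-trial loop (illustrative sketch;
    not the SINR-based algorithm of the paper).

    nodes:     iterable of node ids
    neighbors: dict mapping each node to its set of neighbors
               (nodes within distance R in the unit disk graph)
    delta:     maximum degree of the graph
    """
    color = {v: None for v in nodes}          # None = still uncolored
    uncolored = set(nodes)
    while uncolored:
        proposal = {}
        for v in uncolored:
            # Colors already taken by colored neighbors.
            taken = {color[u] for u in neighbors[v] if color[u] is not None}
            # Since deg(v) <= delta, at least one of the delta+1 colors is free.
            available = [c for c in range(delta + 1) if c not in taken]
            proposal[v] = random.choice(available)
        for v in list(uncolored):
            # Keep the trial color only if no competing neighbor chose it too.
            if all(proposal.get(u) != proposal[v] for u in neighbors[v]):
                color[v] = proposal[v]
                uncolored.discard(v)
    return color
```

In each round, every uncolored node keeps its proposed color with constant probability, which is why such trial-based schemes terminate in a logarithmic number of rounds with high probability; the difficulty addressed by the paper is realizing this coordination when message delivery itself is governed by SINR interference rather than an abstract message-passing model.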
