The Online Broadcast Range-Assignment Problem

Abstract: Let $P=\{p_0,\ldots,p_{n-1}\}$ be a set of points in $\mathbb{R}^d$, modeling devices in a wireless network. A range assignment assigns a range $r(p_i)$ to each point $p_i\in P$, thus inducing a directed communication graph $\mathcal{G}_r$ in which there is a directed edge $(p_i,p_j)$ iff $\mathrm{dist}(p_i,p_j)\leqslant r(p_i)$, where $\mathrm{dist}(p_i,p_j)$ denotes the distance between $p_i$ and $p_j$. The range-assignment problem is to assign the transmission ranges such that $\mathcal{G}_r$ has a certain desirable property, while minimizing the cost of the assignment; here the cost is given by $\sum_{p_i\in P} r(p_i)^{\alpha}$, for some constant $\alpha>1$ called the distance-power gradient. We introduce the online version of the range-assignment problem, where the points $p_j$ arrive one by one and the range assignment has to be updated at each arrival. Following the standard in online algorithms, resources given out cannot be taken away; in our case this means that the transmission ranges never decrease. The property we want to maintain is that $\mathcal{G}_r$ has a broadcast tree rooted at the first point $p_0$. Our results include the following. We prove that already in $\mathbb{R}^1$ a 1-competitive algorithm does not exist; in particular, for distance-power gradient $\alpha=2$ any online algorithm has competitive ratio at least 1.57. For points in $\mathbb{R}^1$ and $\mathbb{R}^2$, we analyze two natural strategies for updating the range assignment upon the arrival of a new point $p_j$. The strategies do not change the assignment if $p_j$ is already within range of an existing point; otherwise they increase the range of a single point, as follows: Nearest-Neighbor (nn) increases the range of $\mathrm{nn}(p_j)$, the nearest neighbor of $p_j$, to $\mathrm{dist}(p_j,\mathrm{nn}(p_j))$, and Cheapest Increase (ci) increases the range of the point $p_i$ for which the cost increase needed to reach the new point $p_j$ is minimal. We give lower and upper bounds on the competitive ratio of these strategies as a function of the distance-power gradient $\alpha$. We also analyze the following variant of nn in $\mathbb{R}^2$ for $\alpha=2$: 2-Nearest-Neighbor (2-nn) increases the range of $\mathrm{nn}(p_j)$ to $2\cdot\mathrm{dist}(p_j,\mathrm{nn}(p_j))$. Finally, we generalize the problem to points in arbitrary metric spaces, where we present an $O(\log n)$-competitive algorithm.
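A minimal sketch of the Nearest-Neighbor (nn) strategy described in the abstract, assuming Euclidean points given as coordinate tuples; processing a list sequentially simulates online arrival, and all names are illustrative rather than taken from the paper.

```python
import math

def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nn_strategy(points, alpha=2.0):
    """Process points in arrival order; points[0] is the root p0.
    Ranges only ever increase. Returns the ranges and the total cost."""
    r = [0.0] * len(points)  # r[i] is the current range of p_i
    for j in range(1, len(points)):
        pj = points[j]
        # Do nothing if p_j is already within range of an earlier point.
        if any(dist(points[i], pj) <= r[i] for i in range(j)):
            continue
        # Otherwise raise the range of p_j's nearest neighbor to reach it.
        # Since p_j was out of everyone's range, this is a strict increase.
        i = min(range(j), key=lambda k: dist(points[k], pj))
        r[i] = dist(points[i], pj)
    return r, sum(ri ** alpha for ri in r)
```

The ci and 2-nn strategies differ only in which point's range is raised and by how much; ci would minimize $(\text{new range})^\alpha - (\text{old range})^\alpha$ over all earlier points instead of picking the nearest neighbor.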

Open Access
Opinion Dynamics with Limited Information

Abstract: We study opinion formation games based on the well-known model proposed by Friedkin and Johnsen (the FJ model). In today's huge social networks, the assumption that in each round agents update their opinions by taking into account the opinions of all their friends is unrealistic. We are therefore interested in the convergence properties of simple and natural variants of the FJ model that use limited information exchange in each round and converge to the same stable point. As in the FJ model, we assume that each agent $i$ has an intrinsic opinion $s_i \in [0,1]$ and maintains an expressed opinion $x_i(t) \in [0,1]$ in each round $t$. To model limited information exchange, we consider an opinion formation process where each agent $i$ meets one random friend $j$ in each round $t$ and learns only her current opinion $x_j(t)$. The amount of influence $j$ imposes on $i$ is reflected by the probability $p_{ij}$ with which $i$ meets $j$. Agent $i$ then suffers a disagreement cost that is a convex combination of $(x_i(t) - s_i)^2$ and $(x_i(t) - x_j(t))^2$. An important class of dynamics in this setting are no-regret dynamics, i.e., dynamics that ensure vanishing regret with respect to the disagreement cost experienced by the agents. We show an exponential gap between the convergence rate of no-regret dynamics and that of more general dynamics that need not ensure no regret. We prove that no-regret dynamics require roughly $\Omega(1/\varepsilon)$ rounds to come within distance $\varepsilon$ of the stable point $x^*$ of the FJ model. On the other hand, we provide an opinion update rule that does not ensure no regret and converges to $x^*$ in $\tilde{O}(\log^2(1/\varepsilon))$ rounds. Finally, in our variant of the FJ model, we show that the agents can adopt a simple opinion update rule that ensures no regret with respect to the experienced disagreement cost and results in an opinion vector that converges within distance $\varepsilon$ of the stable point $x^*$ of the FJ model in $\mathrm{poly}(1/\varepsilon)$ rounds. In view of our lower bound for no-regret dynamics, this rate of convergence is close to best possible.
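A hedged sketch of the limited-information meeting process from the abstract: each round, agent $i$ meets one random friend $j$ (drawn with probability $p_{ij}$) and observes only $x_j(t)$. The concrete update below, a stochastic gradient step on the round's disagreement cost, is one natural no-regret-style rule chosen for illustration; the paper analyzes its own specific rules with different guarantees.

```python
import random

def simulate(s, p, lam, rounds, step=0.1):
    """s[i]: intrinsic opinion in [0,1]; p[i]: dict mapping friend j to the
    meeting probability p_ij; lam[i]: weight on the (x_i - x_j)^2 term."""
    x = list(s)  # expressed opinions, initialized to the intrinsic ones
    for _ in range(rounds):
        new_x = x[:]
        for i in range(len(s)):
            friends = list(p[i])
            weights = [p[i][j] for j in friends]
            j = random.choices(friends, weights=weights)[0]  # one random meeting
            # Stochastic gradient of the round's disagreement cost
            #   (1 - lam_i)*(x_i - s_i)^2 + lam_i*(x_i - x_j)^2.
            grad = 2 * (1 - lam[i]) * (x[i] - s[i]) + 2 * lam[i] * (x[i] - x[j])
            new_x[i] = min(1.0, max(0.0, x[i] - step * grad))  # stay in [0,1]
        x = new_x
    return x
```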

Open Access
Probabilistic Analysis of Optimization Problems on Sparse Random Shortest Path Metrics

Abstract: Simple heuristics for (combinatorial) optimization problems often show a remarkable performance in practice, and worst-case analysis often falls short of explaining it. Because of this, "beyond worst-case analysis" of algorithms, including probabilistic analysis, has recently gained a lot of attention. The instances of many (combinatorial) optimization problems are essentially a discrete metric space. Probabilistic analysis for such metric optimization problems has nevertheless mostly been conducted on instances drawn from Euclidean space, which provides a structure that is usually heavily exploited in the analysis. However, most instances from practice are not Euclidean, and little work has been done on metric instances drawn from other, more realistic, distributions. Some initial results have been obtained in recent years, using random shortest path metrics generated from dense graphs (either complete graphs or Erdős–Rényi random graphs). In this paper we extend these findings to sparse graphs, with a focus on sparse graphs with 'fast growing cut sizes', i.e., graphs for which $|\delta(U)|=\Omega(|U|^\varepsilon)$ holds for some constant $\varepsilon\in(0,1)$ and all subsets $U$ of the vertices, where $\delta(U)$ is the set of edges connecting $U$ to the remaining vertices. A random shortest path metric is constructed by drawing independent random edge weights for each edge in the graph and setting the distance between every pair of vertices to the length of a shortest path between them with respect to the drawn weights. For such instances generated from a sparse graph with fast growing cut sizes, we prove that the greedy heuristic for the minimum-distance maximum matching problem, and the nearest neighbor and insertion heuristics for the traveling salesman problem, all achieve a constant expected approximation ratio. Additionally, for instances generated from an arbitrary sparse graph, we show that the 2-opt heuristic for the traveling salesman problem also achieves a constant expected approximation ratio.
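A minimal sketch of the random shortest path metric construction together with the nearest neighbor TSP heuristic analyzed in the paper. The exponential edge-weight distribution and the use of networkx are illustrative assumptions, not prescribed by the abstract.

```python
import random
import networkx as nx

def random_shortest_path_metric(G):
    """Draw i.i.d. Exp(1) weights on G's edges and return the induced
    shortest-path distance between every pair of vertices."""
    for u, v in G.edges():
        G[u][v]["weight"] = random.expovariate(1.0)
    return dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))

def nearest_neighbor_tour(d, start):
    """Nearest neighbor TSP heuristic on the metric d[u][v]."""
    unvisited = set(d) - {start}
    tour, cur = [start], start
    while unvisited:
        cur = min(unvisited, key=lambda v: d[cur][v])  # greedily go to closest
        tour.append(cur)
        unvisited.remove(cur)
    return tour

# Example usage on a sparse graph (a grid has fast growing cut sizes):
G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(8, 8))
d = random_shortest_path_metric(G)
print(nearest_neighbor_tour(d, start=0))
```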

Open Access