Abstract

In this paper, we propose a distributed algorithm for sensor network localization based on a maximum likelihood formulation. It relies on the Levenberg-Marquardt algorithm, where the computations are distributed among different computational agents using message passing, or equivalently dynamic programming. The resulting algorithm provides good localization accuracy and converges to the same solution as its centralized counterpart. Moreover, it requires fewer iterations and less communication between computational agents than first-order methods. The performance of the algorithm is demonstrated with extensive simulations in Julia, which show that our method outperforms distributed methods based on approximate maximum likelihood formulations.
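The paper distributes the Levenberg-Marquardt computations over a tree of agents; the underlying iteration itself can be illustrated with a small centralized sketch. The anchor layout, noise-free ranges, and damping schedule below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Toy centralized Levenberg-Marquardt for range-based localization.
# The paper distributes these computations via message passing; this
# sketch only shows the LM iteration on a single unknown position.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_pos = np.array([0.3, 0.6])
meas = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free ranges

def residual(x):
    # Difference between predicted and measured ranges.
    return np.linalg.norm(anchors - x, axis=1) - meas

def jacobian(x):
    # d/dx of ||x - a_i|| is the unit vector (x - a_i) / ||x - a_i||.
    d = np.linalg.norm(anchors - x, axis=1)
    return (x - anchors) / d[:, None]

x = np.array([0.5, 0.5])  # initial guess (the paper uses a convex relaxation)
lam = 1e-2                # LM damping parameter
for _ in range(50):
    r, J = residual(x), jacobian(x)
    # LM step: solve (J^T J + lam I) dx = -J^T r
    dx = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
        x, lam = x + dx, lam * 0.5  # accept step, reduce damping
    else:
        lam *= 2.0                  # reject step, increase damping

print(x)  # converges near true_pos = [0.3, 0.6]
```

The damping parameter interpolates between a Gauss-Newton step (small `lam`) and a gradient-descent step (large `lam`), which is why LM is described in the paper as a pseudo-second-order method.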

Highlights

  • The problem we investigate in this paper is that of determining the locations of all sensors in a network, given noisy distance measurements between some sensors

  • It requires a good initialization, and we initialize it with an approximate estimate obtained from the algorithm proposed in [1], which is based on a convex relaxation of our nonlinear least-squares problem formulation

  • In this paper we proposed a distributed algorithm for maximum likelihood estimation for the localization problem, which relies on the Levenberg-Marquardt algorithm and message passing over a tree


Summary

Introduction

The problem we investigate in this paper is that of determining the locations of all sensors in a network, given noisy distance measurements between some sensors. One way to decrease the computational cost at each iteration is to consider a disk relaxation of the localization problem instead of a semi-definite programming relaxation. Based on this idea, the authors in [1] and [18] devise distributed algorithms for solving the resulting problem, which rely on projection-based methods and Nesterov's optimal gradient method, respectively. We will see that, since the number of communications between agents in our algorithm is far smaller than in the algorithm presented in [1], our algorithm can be used on top of the algorithm in [1] to refine its estimate, achieving better accuracy with far fewer iterations than are used in [1]. Note that the algorithms in [1] and [20], which are based on Nesterov's gradient and alternating direction method of multipliers approaches, respectively, are first-order methods, whereas the Levenberg-Marquardt algorithm is a pseudo-second-order method, as it uses approximate Hessian information.
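Under i.i.d. Gaussian range noise, the maximum likelihood formulation described above reduces to a nonlinear least-squares problem over all sensor positions. A minimal sketch of that cost, with an illustrative edge list and measurements (not taken from the paper):

```python
import numpy as np

# ML cost for network localization under i.i.d. Gaussian range noise:
# minimize the sum over measured pairs (i, j) of (||x_i - x_j|| - d_ij)^2.
# Edge list, distances, and positions below are illustrative assumptions.
edges = [(0, 1), (1, 2), (0, 2)]             # pairs with a range measurement
d = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}  # measured distances

def ml_cost(X):
    """X is an (n, 2) array of candidate sensor positions."""
    return sum((np.linalg.norm(X[i] - X[j]) - d[(i, j)]) ** 2
               for i, j in edges)

# An equilateral triangle of side 1 attains (numerically) zero cost:
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
print(ml_cost(X))
```

This cost is nonconvex, which is why the initialization from a convex (disk or SDP) relaxation matters: first-order methods and LM alike only find a local minimum, and a good starting point steers them toward the ML solution.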

Distributed computations
Results and discussion
Simulation data
Conclusion

