Higher Dimensional Consensus: Learning in Large-Scale Networks - Computer Science > Information Theory





Abstract: The paper presents higher dimension consensus (HDC) for large-scale networks. HDC generalizes the well-known average-consensus algorithm. It divides the nodes of the large-scale network into anchors and sensors. Anchors are nodes whose states are fixed over the HDC iterations, whereas sensors are nodes that update their states as a linear combination of the neighboring states. Under appropriate conditions, we show that the sensor states converge to a linear combination of the anchor states. Through the concept of anchors, HDC captures in a unified framework several interesting network tasks, including distributed sensor localization, leader-follower, distributed Jacobi to solve linear systems of algebraic equations, and, of course, average-consensus. In many network applications, it is of interest to learn the weights of the distributed linear algorithm so that the sensors converge to a desired state. We term this inverse problem the HDC learning problem. We pose learning in HDC as a constrained non-convex optimization problem, which we cast in the framework of multi-objective optimization (MOP) and to which we apply Pareto optimality. We prove analytically relevant properties of the MOP solutions and of the Pareto front, from which we derive the solution to learning in HDC. Finally, the paper shows how the MOP approach resolves interesting tradeoffs (speed of convergence versus quality of the final state) arising in learning in HDC in resource-constrained networks.
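The following is a minimal numerical sketch of the anchor/sensor iteration described in the abstract, assuming a toy five-node network. The matrix names `P` (sensor-to-sensor weights) and `B` (sensor-to-anchor weights), the topology, and all numerical values are illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy HDC network: nodes 0-1 are anchors (states fixed over the iterations),
# nodes 2-4 are sensors (states updated as a linear combination of neighbors).
rng = np.random.default_rng(0)

anchors = np.array([1.0, 5.0])   # fixed anchor states
x = rng.uniform(size=3)          # sensor states, arbitrary initialization

# Hypothetical weight matrices; each row of [P B] sums to 1.
P = np.array([[0.4, 0.2, 0.0],   # sensor-to-sensor weights
              [0.2, 0.4, 0.2],
              [0.0, 0.2, 0.4]])
B = np.array([[0.3, 0.1],        # sensor-to-anchor weights
              [0.1, 0.1],
              [0.1, 0.3]])

# HDC iteration: x(t+1) = P x(t) + B u, with anchor states u held fixed.
# When the spectral radius of P is below 1 (it is here, roughly 0.68),
# x converges to the fixed point (I - P)^{-1} B u, i.e. a linear
# combination of the anchor states, as the abstract states.
for _ in range(200):
    x = P @ x + B @ anchors

print("iterated states: ", x)
print("closed-form limit:", np.linalg.solve(np.eye(3) - P, B @ anchors))
```

In this sketch the two printed vectors agree, illustrating the convergence claim; the learning problem posed in the paper amounts to choosing weights such as these so that the sensors converge to a desired state.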



Authors: Usman A. Khan, Soummya Kar, Jose M. F. Moura

Source: https://arxiv.org/






