Privacy Preservation in Distributed Subgradient Optimization Algorithms

Preprint (English)
Lou, Youcheng; Yu, Lean; Wang, Shouyang (2015)
  • Subject: Mathematics - Optimization and Control | Computer Science - Distributed, Parallel, and Cluster Computing

Privacy preservation is becoming an increasingly important issue in data mining and machine learning. In this paper, we study the privacy-preserving properties of distributed subgradient optimization algorithms. We first show that a well-known synchronous distributed subgradient algorithm, in which all agents make their optimization updates simultaneously at every iteration, is not privacy preserving: a malicious agent can learn the other agents' subgradients asymptotically. We then propose an asynchronous distributed subgradient projection algorithm that does not rely on any existing privacy-preservation technique and in which agents exchange data directly with their neighbors. In contrast to the synchronous algorithm, agents in the new algorithm make their optimization updates asynchronously. The introduced projection operation and the asynchronous update mechanism together guarantee that the proposed algorithm is privacy preserving. Moreover, we establish that the proposed algorithm converges to an optimal solution. The proposed privacy-preservation techniques shed light on the design of other privacy-preserving distributed optimization algorithms.
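For concreteness, below is a minimal sketch of the kind of synchronous update the first result concerns, in the classic consensus-plus-subgradient form of Nedić and Ozdaglar; the weight matrix, step-size schedule, and toy objectives are illustrative assumptions rather than the paper's exact construction, and the closing comment indicates informally why broadcasting exact iterates can leak private subgradients.

    import numpy as np

    def synchronous_step(x, W, alpha, subgrad):
        """One synchronous update: every agent i averages its neighbors'
        iterates with weights W[i, j], then takes a subgradient step.

        x       : (n, d) array, row i is agent i's current iterate
        W       : (n, n) doubly stochastic weight matrix
        alpha   : step size for this iteration
        subgrad : callable, subgrad(i, x_i) -> a subgradient of f_i at x_i
        """
        n, d = x.shape
        g = np.vstack([subgrad(i, x[i]) for i in range(n)])
        return W @ x - alpha * g

    if __name__ == "__main__":
        # Toy example (an assumption for illustration): 3 agents
        # minimizing f_i(x) = 0.5 * ||x - c_i||^2 with private targets c_i.
        rng = np.random.default_rng(0)
        c = rng.normal(size=(3, 2))        # each agent's private data
        W = np.full((3, 3), 1.0 / 3.0)     # complete-graph averaging weights
        x = np.zeros((3, 2))
        for k in range(1, 200):
            x = synchronous_step(x, W, 1.0 / k,     # diminishing step size
                                 lambda i, xi: xi - c[i])
        print(x)  # all rows approach the average of the c_i, the minimizer

    # Why exact iterate exchange is risky: an agent that knows the weights
    # and step size and observes the iterates entering and leaving a
    # neighbor j can solve the linear update for j's subgradient g_j(k);
    # the paper shows such recovery succeeds asymptotically.

This sketch only illustrates the synchronous scheme whose leakage the paper analyzes; the proposed remedy replaces it with asynchronous updates plus a projection operation, which break the exact invertibility exploited above.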