Privacy Preservation in Distributed Subgradient Optimization Algorithms
Mathematics - Optimization and Control | Computer Science - Distributed, Parallel, and Cluster Computing
Privacy preservation is becoming an increasingly important issue in data mining and machine learning. In this paper, we study the privacy-preserving properties of distributed subgradient optimization algorithms. We first show that a well-known synchronous distributed subgradient optimization algorithm, in which all agents make their optimization updates simultaneously at all times, is not privacy preserving: a malicious agent can learn the other agents' subgradients asymptotically. We then propose a distributed subgradient projection asynchronous optimization algorithm that does not rely on any existing privacy-preservation technique and in which agents exchange data directly with their neighbors. In contrast to the synchronous algorithm, agents in the new algorithm make their optimization updates asynchronously. The introduced projection operation and the asynchronous update mechanism together guarantee that the proposed algorithm is privacy preserving. Moreover, we establish the optimal convergence of the newly proposed algorithm. The proposed privacy-preservation techniques shed light on the design of other privacy-preserving distributed optimization algorithms.
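To make the setting concrete, the following is a minimal numerical sketch of a synchronous distributed subgradient method of the kind the abstract refers to. The local objectives, the uniform weight matrix, and the diminishing step size are illustrative assumptions, not the paper's actual algorithm: each agent holds a private cost f_i(x) = |x - c_i| and alternates a consensus (weighted-averaging) step with a subgradient step, so the network jointly minimizes the sum of the private costs.

```python
# Illustrative synchronous distributed subgradient sketch (assumptions, not
# the paper's algorithm). Each agent i repeatedly performs
#     x_i <- sum_j w_ij * x_j  -  alpha_t * g_i,
# where g_i is a subgradient of its private cost f_i(x) = |x - c_i| and
# W = (w_ij) is doubly stochastic. The sum of the f_i is minimized at the
# median of the c_i, so all estimates should approach that point.

def subgradient(x, c):
    """A valid subgradient of f(x) = |x - c| (0 chosen at the kink)."""
    if x > c:
        return 1.0
    if x < c:
        return -1.0
    return 0.0

def run(costs, weights, steps):
    n = len(costs)
    x = [0.0] * n                      # initial local estimates
    for t in range(steps):
        alpha = 1.0 / (t + 1)          # diminishing step size
        # Consensus step: each agent mixes its neighbors' current values.
        mixed = [sum(weights[i][j] * x[j] for j in range(n)) for i in range(n)]
        # Local step: each agent descends along its *private* subgradient --
        # it is this quantity that a malicious neighbor may try to infer.
        x = [mixed[i] - alpha * subgradient(mixed[i], costs[i]) for i in range(n)]
    return x

# Three agents on a complete graph with uniform averaging weights; the
# network optimum is the median of the private constants, here 2.0.
c = [1.0, 2.0, 3.0]
W = [[1 / 3] * 3 for _ in range(3)]
estimates = run(c, W, steps=2000)
```

Because every update is broadcast at every step, an observer who knows the weights can subtract the consensus term from a neighbor's new value and recover that neighbor's subgradient, which is the leakage the paper's asynchronous, projection-based variant is designed to prevent.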