Instance-hiding Schemes for Private Distributed Learning

Part of the Proceedings of the International Conference on Machine Learning pre-proceedings (ICML 2020)


Yangsibo Huang, Zhao Song, Kai Li, Sanjeev Arora


<p>An important problem today is how to allow a group of decentralized entities to train a centralized deep net on their private data while protecting data privacy. Classic cryptographic techniques are too inefficient, so other methods have recently been suggested, e.g., differentially private Federated Learning. Here, a new method is introduced, inspired by the classic notion of <em>instance hiding</em> in cryptography. It builds on the Mixup technique, proposed by Zhang et al. (ICLR 2018) as a way to improve generalization and robustness, in which training is performed on nonnegative combinations of inputs. The new ideas in the current paper are: (a) new variants of mixup that allow negative as well as positive coefficients, and that extend sample-wise mixup to be pixel-wise; (b) experiments demonstrating the effectiveness of these variants in protecting privacy against known attacks while preserving utility; (c) theoretical analysis, using ideas from analyses of attacks, suggesting why the method is effective; (d) estimates of security and the release of a challenge dataset to allow the design of new attack schemes.</p>
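To make the two mixing schemes mentioned in the abstract concrete, the following is a minimal NumPy sketch: classic mixup as a random convex (nonnegative) combination of inputs, and an illustrative signed, pixel-wise variant in which a random per-pixel sign mask is applied after mixing. Function names, the choice of a Dirichlet distribution for the coefficients, and the sign-mask construction are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def mixup(images, labels, k=2, rng=None):
    """Classic mixup (Zhang et al., ICLR 2018): train on a random convex
    (nonnegative) combination of k inputs and their one-hot labels.
    Illustrative sketch, not the authors' implementation."""
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.choice(len(images), size=k, replace=False)
    lam = rng.dirichlet(np.ones(k))  # nonnegative coefficients summing to 1
    x = sum(l * images[i] for l, i in zip(lam, idx))
    y = sum(l * labels[i] for l, i in zip(lam, idx))
    return x, y

def signed_pixelwise_mix(images, k=2, rng=None):
    """Instance-hiding-style variant sketched in the abstract: coefficients
    may be negative as well as positive, applied pixel-wise here via a
    random sign mask on the mixed image. Details are assumptions."""
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.choice(len(images), size=k, replace=False)
    lam = rng.dirichlet(np.ones(k))
    x = sum(l * images[i] for l, i in zip(lam, idx))
    sigma = rng.choice([-1.0, 1.0], size=x.shape)  # independent sign per pixel
    return sigma * x
```

The sign mask flips each mixed pixel independently, so an attacker cannot undo the mixing without knowing both the coefficients and the per-pixel signs.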