pytorch-dp: Train PyTorch models with Differential Privacy
pytorch-dp is a library that enables training PyTorch models with differential privacy. It requires minimal code changes on the client side, has little impact on training performance, and lets the client track, at any point during training, the privacy budget expended so far.
pytorch-dp is currently in preview beta and under active development!
Target audience
This code release is aimed at two target audiences:
- ML practitioners will find this code a gentle introduction to training a model with differential privacy, as it requires minimal code changes.
- Differential privacy scientists will find this code easy to experiment and tinker with, allowing them to focus on what matters.
Installation
pip:
pip install pytorch-dp
From source:
git clone https://github.com/facebookresearch/pytorch-dp.git
cd pytorch-dp
pip install -e .
Getting started
To train your model with differential privacy, all you need to do is declare a PrivacyEngine and attach it to your optimizer before training, e.g.:
from torch.optim import SGD
from torchdp import PrivacyEngine  # module installed by pytorch-dp

model = Net()  # any torch.nn.Module
optimizer = SGD(model.parameters(), lr=0.05)
privacy_engine = PrivacyEngine(
    model,
    batch_size,   # training batch size
    sample_size,  # size of the training dataset
    alphas=[1, 10, 100],
    noise_multiplier=1.3,
    max_grad_norm=1.0,
)
privacy_engine.attach(optimizer)
# Now it's business as usual
The MNIST example contains an end-to-end run.
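Once the engine is attached, the rest of the loop is standard PyTorch. The sketch below is illustrative rather than part of the library itself: epochs, train_loader, and the delta of 1e-5 are placeholder assumptions, and the budget is read back through the engine's get_privacy_spent accountant query (exact signature may differ between releases).

import torch.nn.functional as F

for epoch in range(epochs):
    for data, target in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(data), target)
        loss.backward()
        optimizer.step()  # the engine clips per-sample gradients and adds noise here

    # Ask the Rényi DP accountant how much budget has been spent so far
    epsilon, best_alpha = privacy_engine.get_privacy_spent(1e-5)
    print(f"Epoch {epoch}: epsilon = {epsilon:.2f} at delta = 1e-5 (alpha = {best_alpha})")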
Contributing
See the CONTRIBUTING file for how to help out.
References
- Mironov, Ilya. "Rényi differential privacy." 2017 IEEE 30th Computer Security Foundations Symposium (CSF). IEEE, 2017.
- Abadi, Martin, et al. "Deep learning with differential privacy." Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.
- Mironov, Ilya, Kunal Talwar, and Li Zhang. "Rényi Differential Privacy of the Sampled Gaussian Mechanism." arXiv preprint arXiv:1908.10530 (2019).
- Goodfellow, Ian. "Efficient per-example gradient computations." arXiv preprint arXiv:1510.01799 (2015).
- McMahan, H. Brendan, and Galen Andrew. "A general approach to adding differential privacy to iterative training procedures." arXiv preprint arXiv:1812.06210 (2018).
License
This code is released under Apache 2.0, as found in the LICENSE file.
Download files
File details
Details for the file pytorch-dp-0.1b1.tar.gz
File metadata
- Download URL: pytorch-dp-0.1b1.tar.gz
- Upload date:
- Size: 37.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.7.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 65452d4754f76652030f3f847e29af72d87435142ee6f628d75e53c728067371
MD5 | 0300f29ef73f26738d25b25557a3354f
BLAKE2b-256 | 9b34b42c653095834d4bbd4f6ac97e87e96880b57ad4f91680339a83507b6afc
File details
Details for the file pytorch_dp-0.1b1-py3-none-any.whl
File metadata
- Download URL: pytorch_dp-0.1b1-py3-none-any.whl
- Upload date:
- Size: 50.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.4.0 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.7.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0a3ed95297efd8619078b02db19d5b2d5f5d6ed38051f4bf847ec2cce69f2af3
MD5 | 4105e7caf41fd1e392c8b774aedd66c3
BLAKE2b-256 | 2a30bf42b71ca0d592063c75637ace731ad309bfbfa38aee2967d5be70b599f2