[Opacus](https://opacus.ai) is a library that enables training PyTorch models with differential privacy. It supports training with minimal code changes required on the client, has little impact on training performance, and allows the client to track the privacy budget expended at any given moment.
## Target audience
This code release is aimed at two target audiences:

1. ML practitioners will find this a gentle introduction to training a model with differential privacy, as it requires minimal code changes.
2. Differential Privacy researchers will find this easy to experiment and tinker with, allowing them to focus on what matters.
## Installation
The latest release of Opacus can be installed via `pip`:
```bash
pip install opacus
```
Alternatively, via `conda`:
```bash
conda install -c conda-forge opacus
```
You can also install directly from source for the latest features (along with its quirks and the occasional bug):
```bash
git clone https://github.com/pytorch/opacus.git
cd opacus
pip install -e .
```
## Getting started
To train your model with differential privacy, all you need to do is instantiate a `PrivacyEngine` and pass your model, data loader, and optimizer to the engine's `make_private()` method to obtain their private counterparts.
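In code, that looks roughly like the following (a minimal sketch: the toy linear model, random tensors, and the noise/clipping values are illustrative stand-ins for your own setup):

```python
import torch
from torch import nn, optim
from opacus import PrivacyEngine

# Stand-ins for your own model and dataset.
model = nn.Linear(16, 2)
optimizer = optim.SGD(model.parameters(), lr=0.05)
dataset = torch.utils.data.TensorDataset(
    torch.randn(64, 16), torch.randint(0, 2, (64,))
)
data_loader = torch.utils.data.DataLoader(dataset, batch_size=8)

# Wrap the three components to obtain their private counterparts.
privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.1,  # noise added to clipped per-sample gradients
    max_grad_norm=1.0,     # per-sample gradient clipping threshold
)
# From here on, the training loop is unchanged.
```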
The [MNIST example](https://github.com/pytorch/opacus/tree/main/examples/mnist.py) shows an end-to-end run using Opacus. The [examples](https://github.com/pytorch/opacus/tree/main/examples/) folder contains more such examples.
We've built a series of IPython-based tutorials as a gentle introduction to training models with privacy and using various Opacus features:
- [Building an Image Classifier with Differential Privacy](https://github.com/pytorch/opacus/blob/main/tutorials/building_image_classifier.ipynb)
- [Training a differentially private LSTM model for name classification](https://github.com/pytorch/opacus/blob/main/tutorials/building_lstm_name_classifier.ipynb)
- [Opacus Guide: Module Validator and Fixer](https://github.com/pytorch/opacus/blob/main/tutorials/guide_to_module_validator.ipynb)
## Technical report and citation
The technical report introducing Opacus, presenting its design principles, mathematical foundations, and benchmarks, can be found [here](https://arxiv.org/abs/2109.12298).
Consider citing the report if you use Opacus in your papers, as follows:
```
@article{opacus,
  title={Opacus: {U}ser-Friendly Differential Privacy Library in {PyTorch}},
  author={Ashkan Yousefpour and Igor Shilov and Alexandre Sablayrolles and Davide Testuggine and Karthik Prasad and Mani Malek and John Nguyen and Sayan Ghosh and Akash Bharadwaj and Jessica Zhao and Graham Cormode and Ilya Mironov},
  journal={arXiv preprint arXiv:2109.12298},
  year={2021}
}
```
### Blogposts and talks
If you want to learn more about DP-SGD and related topics, check out our series of blogposts and talks:
- [Differential Privacy Series Part 1 | DP-SGD Algorithm Explained](https://medium.com/pytorch/differential-privacy-series-part-1-dp-sgd-algorithm-explained-12512c3959a3)
- [Differential Privacy Series Part 2 | Efficient Per-Sample Gradient Computation in Opacus](https://medium.com/pytorch/differential-privacy-series-part-2-efficient-per-sample-gradient-computation-in-opacus-5bf4031d9e22)
- [PriCon 2020 Tutorial: Differentially Private Model Training with Opacus](https://www.youtube.com/watch?v=MWPwofiQMdE&list=PLUNOsx6Az_ZGKQd_p4StdZRFQkCBwnaY6&index=52)
- [Differential Privacy on PyTorch | PyTorch Developer Day 2020](https://www.youtube.com/watch?v=l6fbl2CBnq0)
- [Opacus v1.0 Highlights | PyTorch Developer Day 2021](https://www.youtube.com/watch?v=U1mszp8lzUI)
- [Enabling Fast Gradient Clipping and Ghost Clipping in Opacus](https://pytorch.org/blog/clipping-in-opacus/)
## FAQ
Check out the [FAQ](https://opacus.ai/docs/faq) page for answers to some of the most frequently asked questions about differential privacy and Opacus.
## Contributing
See the [CONTRIBUTING](https://github.com/pytorch/opacus/tree/main/CONTRIBUTING.md) file for how to help out.
Also check out the README files inside the repo to learn how the code is organized.
---

**docs/faq.md**
Although we report expended privacy budget using the (epsilon, delta) language, internally, we track it using Rényi Differential Privacy (RDP) [[Mironov 2017](https://arxiv.org/abs/1702.07476), [Mironov et al. 2019](https://arxiv.org/abs/1908.10530)]. In short, (alpha, epsilon)-RDP bounds the [Rényi divergence](https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy#R%C3%A9nyi_divergence) of order alpha between the distribution of the mechanism’s outputs on any two datasets that differ in a single element. An (alpha, epsilon)-RDP statement is a relaxation of epsilon-DP but retains many of its important properties that make RDP particularly well-suited for privacy analysis of DP-SGD. The `alphas` parameter instructs the privacy engine what RDP orders to use for tracking privacy expenditure.
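Spelled out (a standard statement of the RDP guarantee described above; the notation here is ours, not from the Opacus docs): a mechanism $M$ satisfies $(\alpha, \varepsilon)$-RDP if, for every pair of datasets $D$ and $D'$ that differ in a single element,

$$D_{\alpha}\bigl(M(D) \,\|\, M(D')\bigr) \le \varepsilon,$$

where $D_{\alpha}$ denotes the Rényi divergence of order $\alpha$.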
When the privacy engine needs to bound the privacy loss of a training run using (epsilon, delta)-DP for a given delta, it searches for the optimal order from among `alphas`. There’s very little additional cost in expanding the list of orders. We suggest using a list `[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))`.

<!-- You can pass your own alphas by passing `alphas=custom_alphas` when calling `privacy_engine.make_private_with_epsilon`. -->
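To make that search concrete, here is an illustrative sketch of the classic RDP-to-(epsilon, delta) conversion from [Mironov 2017]; Opacus's accountant may use a tighter bound internally, so treat this as a model of the idea rather than the library's actual code:

```python
import math

# The suggested grid of RDP orders: a fine grid just above 1, plus integers.
alphas = [1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))

def rdp_to_dp(rdp_eps, alphas, delta):
    """Classic conversion: epsilon(delta) is the minimum over orders alpha of
    rdp_eps(alpha) + log(1/delta) / (alpha - 1). Returns (epsilon, alpha)."""
    return min(
        (eps + math.log(1 / delta) / (alpha - 1), alpha)
        for eps, alpha in zip(rdp_eps, alphas)
    )
```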
A call to `privacy_engine.get_epsilon(delta=delta)` returns a pair: an epsilon such that the training run satisfies (epsilon, delta)-DP and an optimal order alpha. An easy diagnostic to determine whether the list of `alphas` ought to be expanded is whether the returned value alpha is one of the two boundary values of `alphas`.
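A sketch of that diagnostic, assuming `epsilon` and `best_alpha` came from the call described above and `alphas` is the grid of orders:

```python
# If the optimal order lands on either boundary of the grid, the true
# optimum may lie outside it, so the list of orders should be widened.
if best_alpha in (min(alphas), max(alphas)):
    print("Optimal alpha is at the boundary of `alphas`; expand the list.")
```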