We feel strongly that having a consistent code style is extremely important, so
CircleCI will fail on your PR if it does not adhere to the black or flake8 formatting style or isort import ordering.
@@ -96,7 +101,7 @@ Run following command from `website` folder. It will build the docs and serve th
```
You can also perform spell checks on documentation automatically (besides IDEs) using [```sphinxcontrib-spelling```](https://sphinxcontrib-spelling.readthedocs.io/en/latest/install.html)
-Note that you will also need [```PyEnchant```](https://pyenchant.github.io/pyenchant/) to run ```sphinxcontrib-spelling```, and thus the Enchant C library. Use this guide for ```PyEnchant```.
+Note that you will also need [```PyEnchant```](https://pyenchant.github.io/pyenchant/) to run ```sphinxcontrib-spelling```, and thus the Enchant C library. Use this guide for ```PyEnchant```.
Steps:
1. Install the extension with pip: ```pip install sphinxcontrib-spelling```
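To complement the steps above, a minimal sketch of how ```sphinxcontrib-spelling``` is typically wired into a Sphinx `conf.py` and invoked is shown below; the word-list filename and directory layout are illustrative assumptions rather than this repository's actual configuration:

```python
# conf.py -- hypothetical excerpt; adapt to the project's real Sphinx config.

extensions = [
    "sphinxcontrib.spelling",  # registers the "spelling" builder
    # ... the project's other Sphinx extensions ...
]

# Optional: a project-specific word list for terms a stock dictionary would
# flag, e.g. "Opacus" or "DP-SGD" (assumed filename).
spelling_word_list_filename = "spelling_wordlist.txt"

# Print suggested corrections next to each reported misspelling.
spelling_show_suggestions = True

# Run the check with the dedicated builder, for example:
#   sphinx-build -b spelling <source-dir> <build-dir>/spelling
# Misspellings are written to *.spelling files under the build directory.
```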
@@ -22,6 +23,13 @@ This code release is aimed at two target audiences:
2. Differential Privacy researchers will find this easy to experiment and tinker
with, allowing them to focus on what matters.

+
+## Latest updates
+
+2024-12-18: We updated this [tutorial](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb) to show how [LoRA](https://arxiv.org/abs/2106.09685) and the [peft](https://huggingface.co/docs/peft/en/index) library can be used in conjunction with DP-SGD.
+
+2024-08-20: We introduced [Fast Gradient Clipping](https://arxiv.org/abs/2009.03106) and [Ghost Clipping](https://arxiv.org/abs/2110.05679) to Opacus, significantly reducing the memory requirements of DP-SGD. Please refer to our [blogpost](https://pytorch.org/blog/clipping-in-opacus/) for more information.
+
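As a rough illustration of the 2024-08-20 Ghost Clipping update above, here is a sketch modeled on the linked blogpost: `make_private` is called with `grad_sample_mode="ghost"` and the loss criterion, and the training loop stays unchanged. The toy model, data, and hyperparameters are placeholders, and the exact keyword arguments and return values should be checked against the blogpost and the current Opacus API:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy model and data, purely for illustration.
model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))),
    batch_size=8,
)

privacy_engine = PrivacyEngine()

# grad_sample_mode="ghost" enables Ghost Clipping: per-sample gradient norms
# are computed without materializing full per-sample gradients, which lowers
# peak memory. The criterion is passed in because clipping is folded into the
# backward pass over the wrapped loss.
model, optimizer, criterion, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    criterion=criterion,
    data_loader=train_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="ghost",
)

# The training loop is unchanged relative to plain Opacus DP-SGD.
for features, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
```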
## Installation
The latest release of Opacus can be installed via `pip`:
@@ -75,23 +83,16 @@ shows an end-to-end run using Opacus. The
We've built a series of IPython-based tutorials as a gentle introduction to
training models with privacy and using various Opacus features.

+- [Building text classifier with Differential Privacy on BERT](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb)
- [Building an Image Classifier with Differential Privacy](https://github.com/pytorch/opacus/blob/main/tutorials/building_image_classifier.ipynb)
- [Training a differentially private LSTM model for name classification](https://github.com/pytorch/opacus/blob/main/tutorials/building_lstm_name_classifier.ipynb)
-- [Building text classifier with Differential Privacy on BERT](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb)
- [Opacus Guide: Introduction to advanced features](https://github.com/pytorch/opacus/blob/main/tutorials/intro_to_advanced_features.ipynb)
- [Opacus Guide: Grad samplers](https://github.com/pytorch/opacus/blob/main/tutorials/guide_to_grad_sampler.ipynb)
- [Opacus Guide: Module Validator and Fixer](https://github.com/pytorch/opacus/blob/main/tutorials/guide_to_module_validator.ipynb)
@@ -118,12 +119,12 @@ Consider citing the report if you use Opacus in your papers, as follows:
If you want to learn more about DP-SGD and related topics, check out our series
of blogposts and talks:

+- [Enabling Fast Gradient Clipping and Ghost Clipping in Opacus](https://pytorch.org/blog/clipping-in-opacus/)
- [Differential Privacy Series Part 1 | DP-SGD Algorithm Explained](https://medium.com/pytorch/differential-privacy-series-part-1-dp-sgd-algorithm-explained-12512c3959a3)
- [Differential Privacy Series Part 2 | Efficient Per-Sample Gradient Computation in Opacus](https://medium.com/pytorch/differential-privacy-series-part-2-efficient-per-sample-gradient-computation-in-opacus-5bf4031d9e22)
- [PriCon 2020 Tutorial: Differentially Private Model Training with Opacus](https://www.youtube.com/watch?v=MWPwofiQMdE&list=PLUNOsx6Az_ZGKQd_p4StdZRFQkCBwnaY6&index=52)
- [Differential Privacy on PyTorch | PyTorch Developer Day 2020](https://www.youtube.com/watch?v=l6fbl2CBnq0)
- [Opacus v1.0 Highlights | PyTorch Developer Day 2021](https://www.youtube.com/watch?v=U1mszp8lzUI)
-- [Enabling Fast Gradient Clipping and Ghost Clipping in Opacus](https://pytorch.org/blog/clipping-in-opacus/)
docs/faq.md: 3 additions & 3 deletions
@@ -13,8 +13,8 @@ Yes! Opacus is open-source for public use, and it is licensed under the [Apache
## How can I report a bug or ask a question?

-You can report bugs by submitting GitHub issues. To submit a GitHub issue, please [click here](https://github.com/pytorch/opacus/issues).
-You can ask questions in our dedicated PyTorch [Discussion Forum](https://discuss.pytorch.org/c/opacus/29). We actively monitor questions in the PyTorch forums with the category `Opacus`.
+You can report bugs or ask questions by submitting GitHub issues. To submit a GitHub issue, please [click here](https://github.com/pytorch/opacus/issues).
+<!--You can ask questions in our dedicated PyTorch [Discussion Forum](https://discuss.pytorch.org/c/opacus/29). We actively monitor questions in the PyTorch forums with the category `Opacus`.-->
## I'd like to contribute to Opacus. How can I do that?
@@ -76,7 +76,7 @@ If these interventions don’t help (or the model starts to converge but its pri
## How to deal with out-of-memory errors?

-Dealing with per-sample gradients will inevitably put more pressure on your memory: after all, if you want to train with batch size 64, you are looking to keep 64 copies of your parameter gradients. The first sanity check to do is to make sure that you don’t go out of memory with "standard" training (without DP). That should guarantee that you can train with batch size of 1 at least. Then, you can check your memory usage with e.g. `nvidia-smi` as usual, gradually increasing the batch size until you find your sweet spot. Note that this may mean that you still train with small batch size, which comes with its own training behavior (i.e. higher variance between batches). Training with larger batch sizes can be beneficial, and we built `virtual_step` to make this possible while still memory efficient (see *what is virtual batch size* in these FAQs).
+Dealing with per-sample gradients will inevitably put more pressure on your memory: after all, if you want to train with batch size 64, you are looking to keep 64 copies of your parameter gradients. The first sanity check to do is to make sure that you don’t go out of memory with "standard" training (without DP). That should guarantee that you can train with a batch size of at least 1. Then, you can check your memory usage with e.g. `nvidia-smi` as usual, gradually increasing the batch size until you find your sweet spot. Note that this may mean that you still train with a small batch size, which comes with its own training behavior (i.e. higher variance between batches). Training with larger batch sizes can be beneficial. To this end, we built [Fast Gradient Clipping](https://pytorch.org/blog/clipping-in-opacus/) and `virtual_step` (see *what is virtual batch size* in these FAQs) to make DP-SGD memory efficient.
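For illustration of the virtual-batch idea referenced above: the answer mentions `virtual_step`, and newer Opacus versions expose the same idea through `BatchMemoryManager`, which keeps the logical batch size (and the privacy accounting) fixed while capping how many samples are held in memory at once. The following is a hedged sketch with placeholder sizes and a toy model:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine
from opacus.utils.batch_memory_manager import BatchMemoryManager

# Toy setup; all sizes are placeholders.
model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,))),
    batch_size=64,  # logical batch size used for DP-SGD and accounting
)

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

# Each logical batch of 64 is processed in physical chunks of at most 16
# samples, so peak memory stays bounded while the optimizer still behaves as
# if it saw batches of 64.
with BatchMemoryManager(
    data_loader=train_loader,
    max_physical_batch_size=16,
    optimizer=optimizer,
) as memory_safe_loader:
    for features, labels in memory_safe_loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()  # applies an update only once a full logical batch has been processed
```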
## What does epsilon=1.1 really mean? How about delta?