Commit c0b8223: Merge branch 'pytorch:main' into main
2 parents: 72971ef + 57f0349
38 files changed: +1616 / -39 lines

.github/workflows/ci_cpu.yml

Lines changed: 26 additions & 4 deletions
@@ -32,7 +32,7 @@ jobs:
       - name: Lint with black
         run: black --check --diff --color .
       - name: Check import order with isort
-        run: isort -v -l 88 -o opacus --lines-after-imports 2 -m 3 --trailing-comma --check-only .
+        run: isort -l 88 -o opacus --lines-after-imports 2 -m 3 --trailing-comma --check-only .

   ########### UNIT TESTS ##############
   unittest_py38_torch_release:

@@ -52,13 +52,21 @@ jobs:
       - name: Run unit tests
         run: |
           mkdir unittest-py38-release-reports
-          coverage run -m pytest --doctest-modules -p conftest --junitxml=unittest-py38-release-reports/junit.xml opacus
+          coverage run -m pytest --doctest-modules -p conftest opacus
           coverage report -i -m
+          # Format into xml to be used for coveralls
+          coverage xml -i
       - name: Store test results
         uses: actions/upload-artifact@v4
         with:
           name: unittest-py38-release-reports
           path: unittest-py38-release-reports
+      - name: Send coverage to Coveralls (parallel)
+        uses: coverallsapp/github-action@v2
+        with:
+          format: cobertura
+          parallel: true
+          flag-name: run-1

   unittest_py39_torch_release:
     runs-on: ubuntu-latest

@@ -77,13 +85,19 @@ jobs:
       - name: Run unit tests
         run: |
           mkdir unittest-py39-release-reports
-          coverage run -m pytest --doctest-modules -p conftest --junitxml=unittest-py39-release-reports/junit.xml opacus
-          coverage report -i -m
+          coverage run -m pytest --doctest-modules -p conftest opacus
+          coverage xml -i
       - name: Store test results
         uses: actions/upload-artifact@v4
         with:
           name: unittest-py39-release-reports
           path: unittest-py39-release-reports
+      - name: Send coverage to Coveralls (parallel)
+        uses: coverallsapp/github-action@v2
+        with:
+          format: cobertura
+          parallel: true
+          flag-name: run-2

   prv_accountant_values:
     runs-on: ubuntu-latest

@@ -150,11 +164,18 @@ jobs:
           coverage run examples/mnist.py --lr 0.25 --sigma 0.7 -c 1.5 --batch-size 64 --epochs 1 --data-root runs/mnist/data --n-runs 1 --device cpu
           python -c "import torch; accuracy = torch.load('run_results_mnist_0.25_0.7_1.5_64_1.pt'); exit(0) if (accuracy[0]>0.78 and accuracy[0]<0.95) else exit(1)"
           coverage report -i -m
+          coverage xml -i
       - name: Store test results
         uses: actions/upload-artifact@v4
         with:
           name: mnist-cpu-reports
           path: runs/mnist/test-reports
+      - name: Send coverage to Coveralls (parallel)
+        uses: coverallsapp/github-action@v2
+        with:
+          format: cobertura
+          parallel: true
+          flag-name: run-3

   ######## FINISH COVERALLS ##########
   finish_coveralls_parallel:

@@ -168,3 +189,4 @@ jobs:
       with:
         github_token: ${{ secrets.GITHUB_TOKEN }}
         parallel-finished: true
+        carryforward: "run-1,run-2,run-3"

CONTRIBUTING.md

Lines changed: 8 additions & 3 deletions
@@ -31,11 +31,16 @@ for advanced usage).

 Opacus also uses [isort](https://github.com/timothycrosley/isort) to sort imports
 alphabetically and separate into sections. isort is installed easily via
-pip using `pip install isort`, and run locally by calling
+pip using `pip install isort --upgrade`, and run locally by calling
 ```bash
-isort -v -l 88 -o opacus --lines-after-imports 2 -m 3 --trailing-comma .
+isort -l 88 -o opacus --lines-after-imports 2 -m 3 --trailing-comma .
 ```
 from the repository root. Configuration for isort is located in .isort.cfg.
+If using `isort` versions `<5.0.0`, call
+```bash
+isort -l 88 -o opacus --lines-after-imports 2 -m 3 --trailing-comma --recursive
+```
+

 We feel strongly that having a consistent code style is extremely important, so
 CircleCI will fail on your PR if it does not adhere to the black or flake8 formatting style or isort import ordering.
@@ -96,7 +101,7 @@ Run following command from `website` folder. It will build the docs and serve th
 ```

 You can also perform spell checks on documentation automatically (besides IDEs) using [```sphinxcontrib-spelling```](https://sphinxcontrib-spelling.readthedocs.io/en/latest/install.html)
-Note that you will also need [```PyEnchant```](https://pyenchant.github.io/pyenchant/) to run ```sphinxcontrib-spelling```, and thus the Enchant C library. Use this guide for ```PyEnchant```.
+Note that you will also need [```PyEnchant```](https://pyenchant.github.io/pyenchant/) to run ```sphinxcontrib-spelling```, and thus the Enchant C library. Use this guide for ```PyEnchant```.

 Steps:
 1. Install the extension with pip: ```pip install sphinxcontrib-spelling```

README.md

Lines changed: 10 additions & 9 deletions
@@ -2,6 +2,7 @@

 <hr/>

+[![PyPI Downloads](https://static.pepy.tech/badge/opacus)](https://pepy.tech/projects/opacus)
 [![GitHub Actions](https://github.com/pytorch/opacus/actions/workflows/ci_cpu.yml/badge.svg)](https://github.com/pytorch/opacus/actions/workflows/ci_cpu.yml)
 [![Coverage Status](https://coveralls.io/repos/github/pytorch/opacus/badge.svg?branch=main)](https://coveralls.io/github/pytorch/opacus?branch=main)
 [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](CONTRIBUTING.md)

@@ -22,6 +23,13 @@ This code release is aimed at two target audiences:
 2. Differential Privacy researchers will find this easy to experiment and tinker
    with, allowing them to focus on what matters.

+
+## Latest updates
+
+2024-12-18: We updated this [tutorial](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb) to show how [LoRA](https://arxiv.org/abs/2106.09685) and the [peft](https://huggingface.co/docs/peft/en/index) library can be used in conjunction with DP-SGD.
+
+2024-08-20: We introduced [Fast Gradient Clipping](https://arxiv.org/abs/2009.03106) and [Ghost Clipping](https://arxiv.org/abs/2110.05679) to Opacus, significantly reducing the memory requirements of DP-SGD. Please refer to our [blogpost](https://pytorch.org/blog/clipping-in-opacus/) for more information.
+
 ## Installation

 The latest release of Opacus can be installed via `pip`:
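As context for the 2024-12-18 entry added above: the tutorial's core move is to make only the LoRA adapter weights trainable via peft, then hand the model to Opacus as usual. Below is a minimal sketch of that pattern; the model name, LoraConfig values, dummy data, and hyperparameters are illustrative assumptions, not code from the tutorial or from this commit.

```python
# Illustrative sketch only: DP-SGD over LoRA adapters. Names and values are
# assumptions; see tutorials/building_text_classifier.ipynb for the real thing.
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
# Wrap with LoRA: base weights are frozen, only the small adapter matrices train.
model = get_peft_model(
    model, LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
)

# Tiny random dataset (token ids + labels) so the sketch is self-contained.
dataset = TensorDataset(
    torch.randint(0, 30000, (64, 16)), torch.randint(0, 2, (64,))
)
train_loader = DataLoader(dataset, batch_size=8)
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

model.train()
for input_ids, labels in train_loader:
    optimizer.zero_grad()
    out = model(input_ids=input_ids, labels=labels)
    out.loss.backward()  # Opacus clips per-sample grads and adds noise on step()
    optimizer.step()
```

Training only the adapters keeps the set of per-sample gradients small, which is what makes DP-SGD on a large transformer practical.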
@@ -75,23 +83,16 @@ shows an end-to-end run using Opacus. The
 [examples](https://github.com/pytorch/opacus/tree/main/examples/) folder
 contains more such examples.

-### Migrating to 1.0
-
-Opacus 1.0 introduced many improvements to the library, but also some breaking
-changes. If you've been using Opacus 0.x and want to update to the latest
-release, please use this
-[Migration Guide](https://github.com/pytorch/opacus/blob/main/Migration_Guide.md)
-
 ## Learn more

 ### Interactive tutorials

 We've built a series of IPython-based tutorials as a gentle introduction to
 training models with privacy and using various Opacus features.

+- [Building text classifier with Differential Privacy on BERT](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb)
 - [Building an Image Classifier with Differential Privacy](https://github.com/pytorch/opacus/blob/main/tutorials/building_image_classifier.ipynb)
 - [Training a differentially private LSTM model for name classification](https://github.com/pytorch/opacus/blob/main/tutorials/building_lstm_name_classifier.ipynb)
-- [Building text classifier with Differential Privacy on BERT](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb)
 - [Opacus Guide: Introduction to advanced features](https://github.com/pytorch/opacus/blob/main/tutorials/intro_to_advanced_features.ipynb)
 - [Opacus Guide: Grad samplers](https://github.com/pytorch/opacus/blob/main/tutorials/guide_to_grad_sampler.ipynb)
 - [Opacus Guide: Module Validator and Fixer](https://github.com/pytorch/opacus/blob/main/tutorials/guide_to_module_validator.ipynb)

@@ -118,12 +119,12 @@ Consider citing the report if you use Opacus in your papers, as follows:
 If you want to learn more about DP-SGD and related topics, check out our series
 of blogposts and talks:

+- [Enabling Fast Gradient Clipping and Ghost Clipping in Opacus](https://pytorch.org/blog/clipping-in-opacus/)
 - [Differential Privacy Series Part 1 | DP-SGD Algorithm Explained](https://medium.com/pytorch/differential-privacy-series-part-1-dp-sgd-algorithm-explained-12512c3959a3)
 - [Differential Privacy Series Part 2 | Efficient Per-Sample Gradient Computation in Opacus](https://medium.com/pytorch/differential-privacy-series-part-2-efficient-per-sample-gradient-computation-in-opacus-5bf4031d9e22)
 - [PriCon 2020 Tutorial: Differentially Private Model Training with Opacus](https://www.youtube.com/watch?v=MWPwofiQMdE&list=PLUNOsx6Az_ZGKQd_p4StdZRFQkCBwnaY6&index=52)
 - [Differential Privacy on PyTorch | PyTorch Developer Day 2020](https://www.youtube.com/watch?v=l6fbl2CBnq0)
 - [Opacus v1.0 Highlights | PyTorch Developer Day 2021](https://www.youtube.com/watch?v=U1mszp8lzUI)
-- [Enabling Fast Gradient Clipping and Ghost Clipping in Opacus](https://pytorch.org/blog/clipping-in-opacus/)

 ## FAQ

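The Fast Gradient Clipping / Ghost Clipping blogpost promoted in the list above describes an opt-in mode of `PrivacyEngine`. Here is a minimal sketch of how it is switched on, assuming the `criterion` argument and `grad_sample_mode="ghost"` flag described in that blogpost; the exact signature may vary between Opacus versions.

```python
# Sketch based on the "clipping-in-opacus" blogpost linked above; the
# grad_sample_mode/criterion handling is an assumption, not taken from this commit.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
train_loader = DataLoader(dataset, batch_size=8)

privacy_engine = PrivacyEngine()
# With grad_sample_mode="ghost", per-sample gradient norms are computed without
# materializing per-sample gradients, and the returned criterion drives the
# extra backward pass that Ghost Clipping needs.
model, optimizer, criterion, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    criterion=criterion,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="ghost",
)

for x, y in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```

The training loop itself is unchanged; the memory savings come from never storing a per-sample copy of each parameter gradient.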

dev_requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ flake8
 sphinx
 sphinx-autodoc-typehints
 mypy>=0.760
-isort
+isort>=5.0.0
 hypothesis
 tensorboard
 datasets

docs/faq.md

Lines changed: 3 additions & 3 deletions
@@ -13,8 +13,8 @@ Yes! Opacus is open-source for public use, and it is licensed under the [Apache

 ## How can I report a bug or ask a question?

-You can report bugs by submitting GitHub issues. To submit a GitHub issue, please [click here](https://github.com/pytorch/opacus/issues).
-You can ask questions in our dedicated PyTorch [Discussion Forum](https://discuss.pytorch.org/c/opacus/29). We actively monitor questions in the PyTorch forums with the category `Opacus`.
+You can report bugs or ask questions by submitting GitHub issues. To submit a GitHub issue, please [click here](https://github.com/pytorch/opacus/issues).
+<!-- You can ask questions in our dedicated PyTorch [Discussion Forum](https://discuss.pytorch.org/c/opacus/29). We actively monitor questions in the PyTorch forums with the category `Opacus`. -->

 ## I'd like to contribute to Opacus. How can I do that?


@@ -76,7 +76,7 @@ If these interventions don’t help (or the model starts to converge but its pri

 ## How to deal with out-of-memory errors?

-Dealing with per-sample gradients will inevitably put more pressure on your memory: after all, if you want to train with batch size 64, you are looking to keep 64 copies of your parameter gradients. The first sanity check to do is to make sure that you don’t go out of memory with "standard" training (without DP). That should guarantee that you can train with batch size of 1 at least. Then, you can check your memory usage with e.g. `nvidia-smi` as usual, gradually increasing the batch size until you find your sweet spot. Note that this may mean that you still train with small batch size, which comes with its own training behavior (i.e. higher variance between batches). Training with larger batch sizes can be beneficial, and we built `virtual_step` to make this possible while still memory efficient (see *what is virtual batch size* in these FAQs).
+Dealing with per-sample gradients will inevitably put more pressure on your memory: after all, if you want to train with batch size 64, you are looking to keep 64 copies of your parameter gradients. The first sanity check to do is to make sure that you don’t go out of memory with "standard" training (without DP). That should guarantee that you can train with batch size of 1 at least. Then, you can check your memory usage with e.g. `nvidia-smi` as usual, gradually increasing the batch size until you find your sweet spot. Note that this may mean that you still train with small batch size, which comes with its own training behavior (i.e. higher variance between batches). Training with larger batch sizes can be beneficial. To this end, we built [Fast Gradient Clipping](https://pytorch.org/blog/clipping-in-opacus/) and `virtual_step` (see *what is virtual batch size* in these FAQs) to make DP-SGD memory efficient.

 ## What does epsilon=1.1 really mean? How about delta?
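The updated answer above points at virtual steps as the way to decouple the logical batch size from peak memory. In Opacus 1.x that idea is exposed through `BatchMemoryManager`; below is a small sketch of the usage, where the toy model, data, and hyperparameters are placeholders.

```python
# Sketch: keep the logical batch size large while bounding peak memory.
# The model, data, and hyperparameters below are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine
from opacus.utils.batch_memory_manager import BatchMemoryManager

model = nn.Linear(16, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
train_loader = DataLoader(dataset, batch_size=64)  # logical batch size

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

# Split each logical batch into physical batches of at most 8 samples; the
# optimizer only takes a real step once the full logical batch is processed,
# so noise calibration and privacy accounting are unchanged.
with BatchMemoryManager(
    data_loader=train_loader, max_physical_batch_size=8, optimizer=optimizer
) as memory_safe_loader:
    for x, y in memory_safe_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```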

opacus/optimizers/adaclipoptimizer.py

Lines changed: 1 addition & 0 deletions
@@ -53,6 +53,7 @@ def __init__(
         loss_reduction: str = "mean",
         generator=None,
         secure_mode: bool = False,
+        **kwargs,
     ):
         super().__init__(
             optimizer,

opacus/optimizers/ddp_perlayeroptimizer.py

Lines changed: 2 additions & 0 deletions
@@ -48,6 +48,7 @@ def __init__(
         loss_reduction: str = "mean",
         generator=None,
         secure_mode: bool = False,
+        **kwargs,
     ):
         self.rank = torch.distributed.get_rank()
         self.world_size = torch.distributed.get_world_size()

@@ -79,6 +80,7 @@ def __init__(
         loss_reduction: str = "mean",
         generator=None,
         secure_mode: bool = False,
+        **kwargs,
     ):
         self.rank = torch.distributed.get_rank()
         self.world_size = torch.distributed.get_world_size()

opacus/optimizers/ddpoptimizer.py

Lines changed: 2 additions & 0 deletions
@@ -38,6 +38,7 @@ def __init__(
         loss_reduction: str = "mean",
         generator=None,
         secure_mode: bool = False,
+        **kwargs,
     ):
         super().__init__(
             optimizer,

@@ -47,6 +48,7 @@ def __init__(
             loss_reduction=loss_reduction,
             generator=generator,
             secure_mode=secure_mode,
+            **kwargs,
         )
         self.rank = torch.distributed.get_rank()
         self.world_size = torch.distributed.get_world_size()

opacus/optimizers/ddpoptimizer_fast_gradient_clipping.py

Lines changed: 2 additions & 0 deletions
@@ -38,6 +38,7 @@ def __init__(
         loss_reduction: str = "mean",
         generator=None,
         secure_mode: bool = False,
+        **kwargs,
     ):
         super().__init__(
             optimizer,

@@ -47,6 +48,7 @@ def __init__(
             loss_reduction=loss_reduction,
             generator=generator,
             secure_mode=secure_mode,
+            **kwargs,
         )
         self.rank = torch.distributed.get_rank()
         self.world_size = torch.distributed.get_world_size()

opacus/optimizers/optimizer.py

Lines changed: 1 addition & 0 deletions
@@ -205,6 +205,7 @@ def __init__(
         loss_reduction: str = "mean",
         generator=None,
         secure_mode: bool = False,
+        **kwargs,
     ):
         """
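The optimizer diffs above all make the same small change: each `__init__` now accepts and forwards `**kwargs`, so an option understood by the base `DPOptimizer` can flow through the DDP, per-layer, and adaptive-clipping wrappers without every subclass re-declaring it. A stripped-down sketch of the pattern follows; the classes and attributes are simplified stand-ins, not the actual Opacus implementations.

```python
# Simplified illustration of the **kwargs forwarding added in this commit.
# These classes are stand-ins, not the real Opacus optimizers.
from torch.optim import Optimizer


class BaseDPOptimizer:
    def __init__(
        self,
        optimizer: Optimizer,
        *,
        noise_multiplier: float,
        max_grad_norm: float,
        expected_batch_size: int,
        loss_reduction: str = "mean",
        generator=None,
        secure_mode: bool = False,
        **kwargs,  # tolerate options this class does not use itself
    ):
        self.original_optimizer = optimizer
        self.noise_multiplier = noise_multiplier
        self.max_grad_norm = max_grad_norm
        self.expected_batch_size = expected_batch_size
        self.loss_reduction = loss_reduction
        self.generator = generator
        self.secure_mode = secure_mode


class DistributedDPOptimizer(BaseDPOptimizer):
    """A wrapper that adds distributed bookkeeping but no new required args."""

    def __init__(self, optimizer: Optimizer, **kwargs):
        # Forward everything unchanged: any new keyword accepted by the base
        # class flows through without editing this subclass.
        super().__init__(optimizer, **kwargs)
        self.rank = 0  # the real class uses torch.distributed.get_rank()
```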
