
Commit 872b430

Yuchen Zhang authored and facebook-github-bot committed
spelling check pt2 on docstrings in opacus folder (#486)
Summary:

## Types of changes
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [x] Docs change / refactoring / dependency upgrade

## Motivation and Context / Related issue
Check spelling errors in the docstrings of the `opacus` folder using an IDE and `sphinxcontrib.spelling`.
Related to issue #380

## How Has This Been Tested (if it applies)

## Checklist
- [x] The documentation is up-to-date with the changes I made.
- [x] I have read the **CONTRIBUTING** document and completed the CLA (see **CONTRIBUTING**).
- [ ] All tests passed, and additional code has been covered with new tests.

Pull Request resolved: #486

Reviewed By: karthikprasad

Differential Revision: D39083864

Pulled By: zycalice

fbshipit-source-id: 83381650fe5e921d4c669ca89977067c11050007
1 parent 47eab19 commit 872b430

6 files changed: +10 −10 lines changed

opacus/accountants/accountant.py

Lines changed: 2 additions & 2 deletions

@@ -72,7 +72,7 @@ def get_optimizer_hook_fn(
         """
         Returns a callback function which can be used to attach to DPOptimizer
         Args:
-            sample_rate: Expected samping rate used for accounting
+            sample_rate: Expected sampling rate used for accounting
         """

         def hook_fn(optim: DPOptimizer):
@@ -88,7 +88,7 @@ def hook_fn(optim: DPOptimizer):

     def state_dict(self, destination: T_state_dict = None) -> T_state_dict:
         """
-        Retruns a dictionary containing the state of the accountant.
+        Returns a dictionary containing the state of the accountant.
         Args:
             destination: a mappable object to populate the current state_dict into.
                 If this arg is None, an OrderedDict is created and populated.
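Note: the hook described in this docstring is what ties privacy accounting to optimizer steps. A minimal sketch of how it is typically wired up, assuming the Opacus 1.x entry points (`RDPAccountant` and `DPOptimizer.attach_step_hook` are assumptions here; `batch_size`, `dataset_size`, and `dp_optimizer` are placeholders, not names from this diff):

```python
from opacus.accountants import RDPAccountant

# Sketch: build the hook and attach it so the accountant records one
# accounting step each time the DPOptimizer takes a step.
accountant = RDPAccountant()
hook = accountant.get_optimizer_hook_fn(sample_rate=batch_size / dataset_size)
dp_optimizer.attach_step_hook(hook)  # dp_optimizer: a pre-built DPOptimizer instance
```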

opacus/optimizers/ddp_perlayeroptimizer.py

Lines changed: 1 addition & 1 deletion

@@ -67,7 +67,7 @@ def __init__(
 class DistributedPerLayerOptimizer(DPOptimizer):
     """
     :class:`~opacus.optimizers.optimizer.DPOptimizer` that implements
-    per layer clipping strategy and is compatible with distibured data parallel
+    per layer clipping strategy and is compatible with distributed data parallel
     """

     def __init__(
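For context on the "per layer clipping strategy" this docstring mentions, here is a conceptual sketch (not the Opacus implementation): each layer's per-sample gradients are clipped against their own norm budget instead of one global norm computed across all layers.

```python
import torch

def clip_per_layer(per_sample_grad: torch.Tensor, max_norm: float) -> torch.Tensor:
    """Clip each sample's gradient for one layer to at most max_norm (illustrative)."""
    # per_sample_grad has shape (batch, *param_shape)
    flat = per_sample_grad.flatten(start_dim=1)
    per_sample_norms = flat.norm(dim=1)
    scale = (max_norm / (per_sample_norms + 1e-6)).clamp(max=1.0)
    return per_sample_grad * scale.view(-1, *([1] * (per_sample_grad.dim() - 1)))
```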

opacus/optimizers/optimizer.py

Lines changed: 2 additions & 2 deletions

@@ -113,7 +113,7 @@ def _generate_noise(
         reference: The reference Tensor to get the appropriate shape and device
             for generating the noise
         generator: The PyTorch noise generator
-        secure_mode: boolean showing if "secure" noise need to be generate
+        secure_mode: boolean showing if "secure" noise need to be generated
             (see the notes)

     Notes:
@@ -186,7 +186,7 @@ class DPOptimizer(Optimizer):
     Examples:
         >>> module = MyCustomModel()
         >>> optimizer = torch.optim.SGD(module.parameters(), lr=0.1)
-        >>> dp_optimzer = DPOptimizer(
+        >>> dp_optimizer = DPOptimizer(
         ...     optimizer=optimizer,
         ...     noise_multiplier=1.0,
         ...     max_grad_norm=1.0,
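The `_generate_noise` arguments in the first hunk boil down to drawing Gaussian noise shaped like a reference tensor, optionally from a caller-supplied generator. A minimal stand-in for the non-secure path (illustrative function name; the real helper additionally handles the cryptographically secure RNG implied by `secure_mode`):

```python
import torch

def gaussian_noise_like(reference: torch.Tensor, std: float,
                        generator: torch.Generator = None) -> torch.Tensor:
    # Noise with the reference tensor's shape and device; the optional
    # generator makes the draw reproducible.
    return std * torch.randn(reference.shape, device=reference.device, generator=generator)
```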

opacus/utils/module_utils.py

Lines changed: 1 addition & 1 deletion

@@ -72,7 +72,7 @@ def requires_grad(module: nn.Module, *, recurse: bool = False) -> bool:
     Args:
         module: PyTorch module whose parameters are to be examined.
         recurse: Flag specifying if the gradient requirement check should
-            be applied recursively to sub-modules of the specified module
+            be applied recursively to submodules of the specified module

     Returns:
         Flag indicate if any parameters require gradients

opacus/utils/tensor_utils.py

Lines changed: 3 additions & 3 deletions

@@ -30,7 +30,7 @@ def calc_sample_norms(
     Calculates the norm of the given tensors for each sample.

     This function calculates the overall norm of the given tensors for each sample,
-    assuming the each batch's dim is zero.
+    assuming each batch's dim is zero.

     Args:
         named_params: An iterator of tuples <name, param> with name being a
@@ -61,7 +61,7 @@ def calc_sample_norms_one_layer(param: torch.Tensor) -> torch.Tensor:
     Calculates the norm of the given tensor (a single parameter) for each sample.

     This function calculates the overall norm of the given tensor for each sample,
-    assuming the each batch's dim is zero.
+    assuming each batch's dim is zero.

     It is equivalent to:
     `calc_sample_norms(named_params=((None, param),))[0]`
@@ -90,7 +90,7 @@ def sum_over_all_but_batch_and_last_n(
     Calculates the sum over all dimensions, except the first
     (batch dimension), and excluding the last n_dims.

-    This function will ignore the first dimension and it will
+    This function will ignore the first dimension, and it will
     not aggregate over the last n_dims dimensions.

     Args:
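To make the reduction described in the last hunk concrete, here is a small worked example in plain PyTorch of the documented behavior (keep dim 0 and the last `n_dims` dims, sum over everything in between); it illustrates the semantics rather than reproducing the Opacus implementation:

```python
import torch

x = torch.randn(8, 4, 5, 6)  # dim 0 is the batch dimension
n_dims = 2                   # keep the last two dimensions

# Sum over every dimension except dim 0 and the last n_dims dims.
dims_to_sum = list(range(1, x.dim() - n_dims))  # here: [1]
y = x.sum(dim=dims_to_sum)

print(y.shape)  # torch.Size([8, 5, 6])
```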

opacus/utils/uniform_sampler.py

Lines changed: 1 addition & 1 deletion

@@ -68,7 +68,7 @@ class DistributedUniformWithReplacementSampler(Sampler):
        (plus or minus one sample)
     3. Each replica selects each sample of its chunk independently
        with probability `sample_rate`
-    4. Each replica ouputs the selected samples, which form a local batch
+    4. Each replica outputs the selected samples, which form a local batch

     The sum of the lengths of the local batches follows a Poisson distribution.
     In particular, the expected length of each local batch is:
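As a rough sketch of steps 3 and 4 of the protocol above (illustrative names, not the sampler's actual code): each replica keeps every index of its chunk independently with probability `sample_rate`, so the expected local batch length is `sample_rate * len(chunk_indices)` and the total across replicas concentrates around `sample_rate * dataset_size`.

```python
import torch

def select_local_batch(chunk_indices: torch.Tensor, sample_rate: float,
                       generator: torch.Generator = None) -> torch.Tensor:
    # Independent Bernoulli(sample_rate) draw per index in this replica's chunk.
    mask = torch.rand(len(chunk_indices), generator=generator) < sample_rate
    return chunk_indices[mask]
```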
