
Commit 913bfeb

iden-kalemaj authored and facebook-github-bot committed
Fix issue with setting of param_groups/defaults/state for the DPOptimizer wrapper (meta-pytorch#660)
Summary: Pull Request resolved: meta-pytorch#660

Fix for GitHub issue [#649](meta-pytorch#649).

**Background**: DPOptimizer is a wrapper around the original non-DP Optimizer selected by the user. `param_groups`, `state`, and `defaults` are attributes of DPOptimizer that store all parameters related to the learning algorithm, including privacy-related parameters.

**Issue**: Previously, DPOptimizer held `param_groups`, `state`, and `defaults` simply by reference. Another object could therefore update `param_groups` on the DPOptimizer while neglecting to update them on the original Optimizer. This shows up, for example, with an LR (learning rate) scheduler: the learning rate appears to be updated on the DPOptimizer, but it is never actually updated on the original Optimizer (the one that matters).

**Fix**: Expose the three attributes through the `property` decorator so that they always remain in sync between DPOptimizer and the original Optimizer.

Differential Revision: D60453849
1 parent f1d0e02 commit 913bfeb
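To make the by-reference problem concrete, here is a minimal, self-contained sketch contrasting a wrapper that binds `param_groups` by reference once with one that delegates through a property, as this commit does for DPOptimizer. The class names `BrokenWrapper` and `DelegatingWrapper` are illustrative, not Opacus code:

```python
from torch import nn, optim


class BrokenWrapper:
    """Pre-fix behavior (illustrative): param_groups is bound by reference once at init."""

    def __init__(self, original_optimizer: optim.Optimizer):
        self.original_optimizer = original_optimizer
        self.param_groups = original_optimizer.param_groups  # one-time reference copy


class DelegatingWrapper:
    """Post-fix behavior (illustrative): param_groups always reads from / writes to the wrapped optimizer."""

    def __init__(self, original_optimizer: optim.Optimizer):
        self.original_optimizer = original_optimizer

    @property
    def param_groups(self):
        return self.original_optimizer.param_groups

    @param_groups.setter
    def param_groups(self, param_groups):
        self.original_optimizer.param_groups = param_groups


model = nn.Linear(4, 2)

# Rebinding param_groups on the broken wrapper never reaches the real optimizer.
sgd = optim.SGD(model.parameters(), lr=0.1)
broken = BrokenWrapper(sgd)
broken.param_groups = [{"params": list(model.parameters()), "lr": 0.01}]
print(sgd.param_groups[0]["lr"])  # 0.1 -- the wrapped optimizer was never updated

# The same rebinding through the delegating wrapper lands on the wrapped optimizer.
sgd = optim.SGD(model.parameters(), lr=0.1)
delegating = DelegatingWrapper(sgd)
delegating.param_groups = [{"params": list(model.parameters()), "lr": 0.01}]
print(sgd.param_groups[0]["lr"])  # 0.01 -- both views stay in sync
```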

File tree

1 file changed: +45 −0 lines changed


opacus/optimizers/optimizer.py

Lines changed: 45 additions & 0 deletions
@@ -15,6 +15,7 @@
 from __future__ import annotations

 import logging
+from collections import defaultdict
 from typing import Callable, List, Optional, Union

 import torch
@@ -376,6 +377,50 @@ def accumulated_iterations(self) -> int:
             )
         return vals[0]

+    @property
+    def param_groups(self) -> List[dict]:
+        """
+        Returns a list containing a dictionary of all parameters managed by the optimizer.
+        """
+        return self.original_optimizer.param_groups
+
+    @param_groups.setter
+    def param_groups(self, param_groups: List[dict]):
+        """
+        Updates the param_groups of the optimizer, where param_groups is a list containing a dictionary
+        of all parameters managed by the optimizer.
+        """
+        self.original_optimizer.param_groups = param_groups
+
+
+    @property
+    def state(self) -> defaultdict:
+        """
+        Returns a dictionary holding the current optimization state.
+        """
+        return self.original_optimizer.state
+
+    @state.setter
+    def state(self, state: defaultdict):
+        """
+        Updates the state of the optimizer, where state is a dictionary holding the current optimization state.
+        """
+        self.original_optimizer.state = state
+
+    @property
+    def defaults(self) -> dict:
+        """
+        Returns a dictionary containing default values for optimization.
+        """
+        return self.original_optimizer.defaults
+
+    @defaults.setter
+    def defaults(self, defaults: dict):
+        """
+        Updates the defaults of the optimizer, where defaults is a dictionary containing default values for optimization.
+        """
+        self.original_optimizer.defaults = defaults
+
     def attach_step_hook(self, fn: Callable[[DPOptimizer], None]):
         """
         Attaches a hook to be executed after gradient clipping/noising, but before the
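As a quick sanity check of the behavior this diff enables, the sketch below attaches a standard PyTorch LR scheduler to a DPOptimizer and verifies that the wrapper and the wrapped optimizer report the same learning rate after a scheduler step. `DPOptimizer` and `StepLR` are real Opacus/PyTorch classes, but the toy model, learning rate, noise multiplier, clipping norm, and expected batch size are illustrative placeholders:

```python
from torch import nn, optim
from opacus.optimizers import DPOptimizer

model = nn.Linear(4, 2)
sgd = optim.SGD(model.parameters(), lr=0.1)

# Wrap the plain optimizer; noise/clipping/batch-size values are placeholders.
dp_optimizer = DPOptimizer(
    optimizer=sgd,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    expected_batch_size=4,
)

# Attach a standard PyTorch scheduler to the *wrapper*, as a training loop would.
scheduler = optim.lr_scheduler.StepLR(dp_optimizer, step_size=1, gamma=0.5)

# This demo skips the actual dp_optimizer.step(), so PyTorch may emit a harmless
# "scheduler.step() before optimizer.step()" ordering warning here.
scheduler.step()

# With the property delegation above, the wrapper and the wrapped optimizer
# agree on the decayed learning rate (0.1 * 0.5).
assert dp_optimizer.param_groups[0]["lr"] == sgd.param_groups[0]["lr"]
print(sgd.param_groups[0]["lr"])  # 0.05
```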
