A comprehensive toolbox for model inversion attacks and defenses, which is easy to get started.
Code for ML-Doctor (holistic risk assessment of inference attacks against machine learning models)
Unofficial PyTorch implementation of the paper "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures"
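The attack in that paper recovers a representative input for a target class by gradient ascent on the model's confidence for that class. A minimal NumPy sketch for a white-box softmax regression model is shown below; the function name and parameters are illustrative, not taken from any of the listed repositories:

```python
import numpy as np

def model_inversion(W, b, target_class, n_steps=500, lr=0.1):
    """Fredrikson-style confidence-based inversion (illustrative sketch).

    Given the known weights (W, b) of a softmax regression model,
    gradient-ascends the target class's confidence with respect to the
    input, yielding a representative input for that class.
    """
    x = np.zeros(W.shape[1])            # start from a blank input
    for _ in range(n_steps):
        logits = W @ x + b
        p = np.exp(logits - logits.max())
        p /= p.sum()                    # softmax confidences
        # gradient of log p[target_class] w.r.t. x is (e_t - p) @ W
        grad = (np.eye(len(b))[target_class] - p) @ W
        x += lr * grad                  # maximize target confidence
        x = np.clip(x, 0.0, 1.0)        # keep within a valid input range
    return x
```

Against a deep network the same loop is usually run with autograd (e.g. PyTorch) instead of the closed-form gradient, often with a regularizer or image prior to keep the reconstruction natural-looking.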
[CVPR-2023] Re-thinking Model Inversion Attacks Against Deep Neural Networks
[ICML 2023] "On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation"
Reveals the vulnerabilities of SplitNN (split learning)
[KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks"
Official code for paper: Z. Zhang, X. Wang, J. Huang and S. Zhang, "Analysis and Utilization of Hidden Information in Model Inversion Attacks," in IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2023.3295942
torchplus is a utilities library that extends PyTorch and torchvision
Official code for paper: Z. Zhang and J. Huang, “Aligning the domains in cross domain model inversion attack,” Neural Networks, vol. 178, p. 106490, Oct. 2024, doi: 10.1016/j.neunet.2024.106490.
Implementation of the model inversion attack on a Gated Recurrent Unit (GRU) neural network
Official code for paper: Z. Zhang and J. Huang, “Exploiting the connections between images and deep feature vectors in model inversion attacks,” Neurocomputing, p. 131457, Sept. 2025, doi: 10.1016/j.neucom.2025.131457.