Cluster Shuffling Federated Learning

A privacy-preserving federated learning system with cluster shuffling, SMPC, and gradient pruning for communication-efficient, secure distributed training.

Requires Python 3.8+ and PyTorch. MIT License.

πŸš€ Quick Start

# Install
git clone https://github.com/Tanguyvans/cluster-shuffling-fl.git
cd cluster-shuffling-fl
pip3 install -r requirements.txt

# Run
python3 main.py

Result: Federated learning on CIFAR-10 with 6 clients, 10 rounds, 80% communication savings from gradient pruning!

πŸ“– New to FL? β†’ Quickstart Guide


✨ Key Features

Privacy & Security

  • πŸ”„ Cluster Shuffling: Dynamic client reorganization prevents long-term inference
  • πŸ” SMPC: Secret sharing (additive & Shamir's) protects model updates
  • πŸ›‘οΈ Differential Privacy: Calibrated noise for formal privacy guarantees

Communication Efficiency

  • πŸ“‰ Gradient Pruning (NEW!): 80% communication reduction via Deep Gradient Compression (DGC)
  • ⚑ Top-k Sparsification: Send only 10% of gradients with momentum correction
  • πŸ”— Compatible: Works with SMPC, DP, and all privacy mechanisms

Attack Evaluation

  • βš”οΈ Poisoning Attacks: 6 attack types (Label Flip, IPM, ALIE, Backdoor, etc.)
  • πŸ” Privacy Attacks: Gradient inversion, membership inference
  • πŸ“Š Comprehensive Metrics: PSNR, accuracy, communication overhead

Byzantine Robustness

  • Krum, Multi-Krum
  • Trimmed Mean, Median
  • FLTrust - Trust-based aggregation

πŸ“š Documentation

  • Getting Started
  • Core Features
  • Attack Evaluation
  • Measurement

πŸ“– Full Documentation Index


🎯 Use Cases

Research & Evaluation

# In config.py: test gradient pruning impact
"gradient_pruning": {"enabled": True, "keep_ratio": 0.1}

# Then run training
python3 main.py

# Compare attack resistance
python3 run_grad_inv.py --config aggressive

Privacy Evaluation

# In config.py: enable all privacy mechanisms
"diff_privacy": True,
"clustering": True,
"type_ss": "shamir",
"gradient_pruning": {"enabled": True}

Attack Testing

# In config.py: configure a poisoning attack
"poisoning_attacks": {
    "enabled": True,
    "malicious_clients": ["c0_1"],
    "attack_type": "ipm",
    "attack_intensity": 0.5
}
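For intuition about what the `"ipm"` attack type does: Inner Product Manipulation submits an update whose inner product with the honest aggregation direction is negative, so averaging it in pushes the global model backwards. A minimal sketch assuming the attacker can estimate the benign mean; `ipm_update` is illustrative, not the `attacks/poisoning/ipm_attack.py` API:

```python
def ipm_update(benign_updates, intensity=0.5):
    """Return the negated benign mean, scaled by intensity.

    The result points against the true aggregation direction, so its
    inner product with the benign mean is negative.
    """
    n = len(benign_updates)
    dim = len(benign_updates[0])
    mean = [sum(u[i] for u in benign_updates) / n for i in range(dim)]
    return [-intensity * m for m in mean]
```

`attack_intensity` in the config plays the role of the scaling factor here.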

πŸ“Š Results

Communication Efficiency

| Method                    | Compression | Savings | Accuracy Impact |
|---------------------------|-------------|---------|-----------------|
| Baseline                  | 1.0x        | 0%      | -               |
| Gradient Pruning (k=0.1)  | 5.0x        | 80%     | <1%             |
| Gradient Pruning (k=0.05) | 10.0x       | 90%     | ~2%             |
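Why does keeping 10% of gradients give 5x compression rather than 10x? Each kept value must ship with its coordinate index, which roughly doubles the per-entry cost. The arithmetic below assumes an index costs as much as a value (an accounting assumption, not a measured figure from this repo):

```python
def compression_ratio(keep_ratio, index_overhead=1.0):
    """Sparse payload fraction = keep_ratio * (1 + index_overhead).

    Assumes each kept value also carries an equally sized index.
    Returns (compression factor, fractional savings).
    """
    payload = keep_ratio * (1.0 + index_overhead)
    return 1.0 / payload, 1.0 - payload
```

Under this accounting, `keep_ratio=0.1` gives 5x compression (80% savings) and `keep_ratio=0.05` gives 10x (90% savings), matching the table.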

Privacy Protection (PSNR - lower is better)

| Defense        | Gradient Inversion PSNR | Privacy Level    |
|----------------|-------------------------|------------------|
| None           | 28 dB                   | ❌ Vulnerable    |
| SMPC           | 18 dB                   | βœ… Moderate      |
| SMPC + Pruning | 15 dB                   | βœ… Strong        |
| SMPC + DP      | 12 dB                   | βœ…βœ… Very Strong |
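PSNR here measures how closely a gradient inversion attack reconstructs a training image, so lower is better for privacy. The standard formula, over images as flat lists of pixels in [0, 1]:

```python
import math

def psnr(original, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio in dB.

    Lower PSNR means the attacker's reconstruction is further from
    the original training image.
    """
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstruction)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```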

Attack Resistance

| Defense           | IPM Attack Impact | Label Flip Impact |
|-------------------|-------------------|-------------------|
| FedAvg            | -40% accuracy     | -35% accuracy     |
| Krum              | -13% accuracy     | -8% accuracy      |
| Krum + Clustering | -4% accuracy      | -2% accuracy      |

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Clients   β”‚ ──► Local Training
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
       β”‚
       β”œβ”€β”€β–Ί Gradient Pruning (80% reduction)
       β”‚
       β”œβ”€β”€β–Ί SMPC Secret Sharing
       β”‚
       β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Aggregation β”‚ ──► Krum / FedAvg / FLTrust
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
       β”‚
       β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Global Modelβ”‚ ──► Broadcast to Clients
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
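The loop in the diagram can be sketched end to end on toy data, with pruning and SMPC omitted for brevity. Models are plain lists of floats and "local training" is a single SGD step on a squared-error objective; `fl_round` is a teaching sketch, not the orchestration in `main.py`:

```python
def fl_round(global_model, client_data, lr=0.1):
    """One synchronous round: local step per client, FedAvg, broadcast.

    Each client holds one target vector; its local objective is the
    squared distance between the model and that target.
    """
    updates = []
    for target in client_data:
        # Gradient of sum((w - t)^2) and one SGD step from the global model.
        grad = [2 * (w - t) for w, t in zip(global_model, target)]
        updates.append([w - lr * g for w, g in zip(global_model, grad)])
    # FedAvg: coordinate-wise mean of the client models, then broadcast.
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(global_model))]
```

In the real system, each client's update would be pruned and secret-shared before aggregation, and Krum or FLTrust could replace the mean.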

πŸ”§ Configuration

Edit config.py for quick customization:

# Dataset & Model
"name_dataset": "cifar10",      # cifar10, cifar100, ffhq128
"arch": "simplenet",            # simplenet, resnet18, mobilenet

# Federated Learning
"n_rounds": 10,                 # Training rounds
"number_of_clients_per_node": 6, # Clients per node

# Gradient Pruning (NEW!)
"gradient_pruning": {
    "enabled": True,            # 80% communication savings
    "keep_ratio": 0.1,          # Keep 10% of gradients
}

# Privacy
"diff_privacy": True,           # Enable DP
"clustering": True,             # Cluster shuffling

# Aggregation
"aggregation": {
    "method": "krum",           # fedavg, krum, fltrust
}

πŸ“– Complete Configuration Guide


πŸ“ Project Structure

cluster-shuffling-fl/
β”œβ”€β”€ main.py                     # Main FL orchestrator
β”œβ”€β”€ config.py                   # Configuration settings
β”‚
β”œβ”€β”€ docs/                       # πŸ“š Documentation
β”‚   β”œβ”€β”€ getting-started/        # Installation, quickstart, config
β”‚   β”œβ”€β”€ features/               # Gradient pruning, privacy, etc.
β”‚   β”œβ”€β”€ attacks/                # Poisoning, gradient inversion
β”‚   └── measurement/            # Metrics and evaluation
β”‚
β”œβ”€β”€ federated/                  # FL implementation
β”‚   β”œβ”€β”€ client.py               # Client training
β”‚   β”œβ”€β”€ server.py               # Server aggregation
β”‚   └── flower_client.py        # Flower wrapper
β”‚
β”œβ”€β”€ security/                   # Privacy mechanisms
β”‚   β”œβ”€β”€ secret_sharing.py       # SMPC implementation
β”‚   └── gradient_pruning.py     # DGC implementation
β”‚
β”œβ”€β”€ attacks/poisoning/          # Attack framework
β”‚   β”œβ”€β”€ labelflip_attack.py
β”‚   β”œβ”€β”€ ipm_attack.py
β”‚   └── ...
β”‚
└── models/architectures/       # Neural network models
    β”œβ”€β”€ simplenet.py
    β”œβ”€β”€ resnet.py
    └── ...

πŸ§ͺ Testing

# Test gradient pruning
python3 test_gradient_pruning.py

# Run gradient inversion attack
python3 run_grad_inv.py --config default

# Measure communication savings
python3 measure_communication.py --keep-ratio 0.1

πŸ“– Research & Papers

This framework implements and evaluates:

  • Deep Gradient Compression (Lin et al., ICLR 2018)
  • Cluster Shuffling for federated learning
  • Byzantine-robust aggregation (Krum, Trimmed Mean)
  • Gradient inversion attacks (DLG, iDLG, GIAS, GIFD)

See Research Papers for full citations.


🀝 Contributing

Contributions are welcome! Areas for improvement:

  • Additional attack implementations
  • More aggregation methods
  • Enhanced privacy mechanisms
  • Documentation improvements

πŸ“ License

This project is released under the MIT License. See LICENSE for details.


πŸ™ Acknowledgments

  • Flower - Federated learning framework
  • Opacus - Differential privacy library
  • PyTorch - Deep learning framework

πŸ“§ Contact

For questions or collaborations:


πŸš€ Ready to get started? β†’ Quickstart Guide
