Noösphere

Faster, Fairer, Safer, Stronger — Decentralized ML

🚀 Vision

🌐 Make Decentralized ML

Faster ⚡   |   Fairer ⚖️   |   Smarter 🧠   |   Stronger 💪

We believe the future of machine learning is decentralized — where intelligence grows collaboratively, without data ever leaving its source.

Our mission is to accelerate decentralized ML and make it the de facto paradigm for building intelligent, efficient, and privacy-preserving systems.

No central authority — just collective learning, where everyone contributes safely, privately, and efficiently.


🧠 What We’re Building

We push the boundaries of learning and optimization at the edge, with research aimed at scalable, efficient, and multimodal decentralized learning systems.

🔍 Key Directions

  • 🤝 Federated & Split Learning — frameworks for decentralized collaboration across heterogeneous devices and institutions.
  • 🧩 Foundation Model Fine-Tuning — adapting large-scale (multimodal) Foundation Models (FMs) to decentralized and resource-constrained environments.
  • 🔒 Privacy-Preserving Mechanisms — integrating differential privacy, encryption, and secure aggregation into multimodal FMs.
  • 🛰️ Edge & On-Device Intelligence — enabling lightweight, self-improving models that learn directly where data is generated.
  • 🔄 Decentralized Optimization & Aggregation — redefining how distributed models synchronize, exchange knowledge, and evolve without central coordination (see the sketch after this list).
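
To make the last direction concrete, below is a minimal federated-averaging sketch (in the spirit of FedAvg) using plain NumPy. It is illustrative only: the names `local_update` and `fedavg` and the toy linear-regression task are hypothetical, not taken from any Noösphere repository, and production federated learning layers client sampling, weighted aggregation, secure aggregation, and differential privacy on top of this loop.

```python
# Minimal federated-averaging sketch (FedAvg-style). Illustrative only:
# all names and the toy task are hypothetical, not Noösphere code.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    """One client's local training: gradient descent on a linear-regression loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * mean((Xw - y)^2)
        w -= lr * grad
    return w

def fedavg(global_weights, clients, rounds=50):
    """Server loop: broadcast weights, train locally on each client, average."""
    w = global_weights
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        # Raw data never leaves a client; only model weights are exchanged.
        w = np.mean(updates, axis=0)
    return w

# Toy usage: three clients whose private datasets share one underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

print(fedavg(np.zeros(2), clients))  # converges near [2.0, -1.0]
```

The essential property sits in the aggregation step: clients exchange only model parameters, never raw data, which is the foundation the privacy and efficiency directions above build on.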

🧩 Projects

We’re developing a growing ecosystem of open-source projects — spanning Efficiency, Adaptivity, Privacy, and Edge FMs — to accelerate the future of decentralized intelligence.

| 🌐 Project | 🧾 Description | 🎯 Focus Area | 🏛️ Venue |
| --- | --- | --- | --- |
| FedSTAR | Semi-supervised FL with adaptive reliability. | 🧭 Adaptivity | ICASSP 2022 |
| FedLN | FL under label noise. | 🧭 Adaptivity | NeurIPS 2022 Workshop |
| FedCompress | Task-adaptive model compression for efficient FL. | ⚡ Efficiency | ICASSP 2024 |
| EncCluster | Scalable FM secure aggregation through weight clustering. | 🔒 Privacy | NeurIPS 2024 Workshop |
| DeltaMask | Communication-efficient federated FM fine-tuning via masking. | ⚡ Efficiency / 🛰️ Edge FMs | ICML 2024 Workshop |
| MPSL | Multimodal FM fine-tuning via parallel SL. | 🛰️ Edge FMs / ⚡ Efficiency | IJCAI 2025 Workshop |
| MaTU | Many-task federated FM fine-tuning via unified task vectors. | 🧭 Adaptivity / 🛰️ Edge FMs | IJCAI 2025 |
| EFU | Enforceable Federated Unlearning. | 🔒 Privacy | CIKM 2025 |

Accelerating the future of decentralized intelligence — together.
