# Ethical AI Governance and Bias Detection Platform
- Overview
- Key Features
- Architecture
- Getting Started
- API Documentation
- Frontend Features
- Technology Stack
- Project Structure
- Development
- Deployment
- Contributing
- Security
- Roadmap
- License
## Overview

FairMind is a production-ready AI Governance and Bias Detection Platform designed for modern AI systems. It provides comprehensive tools for detecting bias, generating compliance reports, and ensuring ethical AI development across classic machine learning, Large Language Models (LLMs), and multimodal systems.
FairMind helps organizations:
- Detect bias in AI models across multiple domains (Classic ML, LLMs, Multimodal)
- Automatically generate remediation code to fix detected biases
- Generate compliance reports for GDPR, EU AI Act, and other regulations
- Create AI Bill of Materials (BOM) for model transparency
- Integrate with MLOps tools (Weights & Biases, MLflow) for experiment tracking
- Monitor model performance and bias metrics in real-time
- Manage model lifecycle and governance
- Backend API: api.fairmind.xyz
- API Documentation: api.fairmind.xyz/docs
- Frontend Application: app-demo.fairmind.xyz
## Key Features

### Classic Machine Learning Bias Detection
- Demographic Parity: Measures equal positive prediction rates across groups
- Equalized Odds: Ensures equal true positive and false positive rates
- Disparate Impact Analysis: Statistical parity difference calculation
- Individual Fairness: Counterfactual fairness testing
- Group Fairness: Multiple protected attribute analysis
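As a rough sketch of what these checks compute, demographic parity difference and the disparate impact ratio fit in a few lines of NumPy. This is illustrative only, not FairMind's implementation, which handles multiple protected attributes and the other metrics listed above:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups (0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive rates; values below 0.8 commonly flag disparate impact."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
print(disparate_impact_ratio(preds, groups))         # 0.25 / 0.75, about 0.33
```

A value of 0 (or a ratio of 1.0) means the two groups receive positive predictions at the same rate.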
### Large Language Model (LLM) Bias Detection
- WEAT (Word Embedding Association Test): Detects implicit bias in word embeddings
- SEAT (Sentence Embedding Association Test): Tests bias in sentence-level embeddings
- Minimal Pairs Testing: Systematic bias detection through controlled comparisons
- Counterfactual Fairness: Tests model behavior under counterfactual scenarios
- Stereotype Detection: Identifies stereotypical associations in model outputs
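The WEAT effect size (Caliskan et al.) measures how differently two target word sets associate with two attribute sets under cosine similarity. The sketch below uses toy 2-D vectors; a real test runs over actual model embeddings and adds a permutation test for significance:

```python
import numpy as np

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: association of target sets X, Y with attribute sets A, B."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def assoc(w):
        # mean similarity to attribute set A minus mean similarity to set B
        return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

    s_x = [assoc(x) for x in X]
    s_y = [assoc(y) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1)

# Toy "embeddings": X aligns with A, Y aligns with B -> positive effect size
X = [np.array([1.0, 0.1]), np.array([0.9, 0.0])]
Y = [np.array([0.1, 1.0]), np.array([0.0, 0.9])]
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
print(weat_effect_size(X, Y, A, B))  # positive: X is closer to A, Y to B
```

An effect size near zero indicates no differential association; larger magnitudes indicate stronger implicit bias.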
### Multimodal Bias Detection
- Image Generation Bias: Analyzes bias in image generation models (DALL-E, Stable Diffusion, etc.)
- Audio Generation Fairness: Tests bias in audio synthesis models
- Video Content Bias: Detects bias in video generation and analysis
- Cross-Modal Stereotype Analysis: Identifies bias across different modalities
- Representation Bias: Analyzes demographic representation in generated content
### Automated Remediation

FairMind generates production-ready Python code to fix detected biases:
- Reweighting Strategies: Adjusts sample weights to balance protected groups
- Resampling Techniques: Oversampling/undersampling to address class imbalance
- Threshold Optimization: Finds optimal decision thresholds for fairness
- Model Retraining Pipelines: Complete retraining workflows with fairness constraints
- Post-Processing Methods: Calibration and adjustment techniques
- Pre-Processing Solutions: Data transformation and cleaning strategies
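To illustrate the reweighting idea (this is a sketch of the general Kamiran-Calders reweighing scheme, not FairMind's generated code), each (group, label) cell is weighted by its expected-over-observed frequency so the protected attribute and the label become statistically independent:

```python
import numpy as np

def reweighing_weights(y, group):
    """Per-sample weights that decorrelate the label from the protected group."""
    y, group = np.asarray(y), np.asarray(group)
    w = np.zeros(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            observed = mask.mean()
            expected = (group == g).mean() * (y == label).mean()
            if observed > 0:
                # cells over-represented relative to independence get weights < 1
                w[mask] = expected / observed
    return w

# Group 1 rarely sees label 0, so that cell is up- or down-weighted accordingly
w = reweighing_weights([1, 1, 1, 0], [0, 0, 1, 1])
# Pass as sample_weight to an estimator, e.g. model.fit(X, y, sample_weight=w)
```

On perfectly balanced data every weight is 1.0, so the transformation is a no-op when no reweighting is needed.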
### MLOps Integration

Seamless integration with experiment tracking platforms:

#### Weights & Biases Integration

- Automatic logging of bias test results
- Deep linking from FairMind results to W&B dashboards
- Experiment tracking and comparison
- Model versioning and registry

#### MLflow Integration

- Experiment tracking and model registry
- Artifact storage and management
- Model serving and deployment tracking
- Performance metrics logging

Highlights:

- Zero-Configuration Setup: Enable via environment variables
- Automatic Logging: All bias tests are automatically logged to configured platforms
- Dashboard Links: Direct links from results to experiment dashboards
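The zero-configuration behavior can be pictured as a small dispatcher keyed on environment variables. This is a hedged sketch, not FairMind's integration code; the `wandb` and `mlflow` calls shown are those libraries' standard public APIs, imported lazily so neither is required unless enabled:

```python
import os

def log_bias_result(metrics: dict) -> list:
    """Log bias-test metrics to whichever MLOps platforms are configured
    via environment variables; returns the list of platforms used."""
    logged = []
    if os.getenv("WANDB_API_KEY"):
        import wandb  # lazy import: only needed when W&B logging is enabled
        run = wandb.init(project=os.getenv("WANDB_PROJECT", "fairmind"))
        run.log(metrics)
        run.finish()
        logged.append("wandb")
    if os.getenv("MLFLOW_TRACKING_URI"):
        import mlflow  # lazy import: only needed when MLflow is enabled
        mlflow.set_tracking_uri(os.environ["MLFLOW_TRACKING_URI"])
        with mlflow.start_run():
            mlflow.log_metrics(metrics)
        logged.append("mlflow")
    return logged
```

With neither variable set the function is a no-op, which is what makes the setup zero-configuration: enabling a platform is purely an environment change.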
### AI Bill of Materials (BOM)
- Standard SBOM format for AI models
- Component tracking and provenance
- Dependency analysis and vulnerability scanning
- Model lineage and version history
- Training data documentation
### Regulatory Compliance
- EU AI Act Assessment: Automated compliance checking against EU AI Act requirements
- GDPR Compliance: Data protection and privacy compliance reporting
- DPDP Act (India): Digital Personal Data Protection Act compliance
- India AI Framework: NITI Aayog Responsible AI Guidelines compliance
- ISO/IEC 42001: AI Management System Standard compliance
- NIST AI RMF: Risk Management Framework alignment
- IEEE 7000: Ethical concerns process compliance
### Risk Assessment
- Automated risk categorization (High/Medium/Low)
- Policy-based risk evaluation
- Compliance gap analysis
- Remediation recommendations
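A policy-based categorization can be as simple as thresholding a bias score, with stricter cut-offs for high-risk domains. The thresholds and rule below are invented for illustration; actual risk policies are expected to be configurable:

```python
def categorize_risk(bias_score: float, high_risk_domain: bool) -> str:
    """Toy policy: high-risk domains (e.g. hiring, credit) escalate severity.
    Thresholds here are illustrative, not FairMind defaults."""
    if bias_score > 0.2 or (high_risk_domain and bias_score > 0.1):
        return "High"
    if bias_score > 0.1:
        return "Medium"
    return "Low"

# The same score can land in different categories depending on the domain
print(categorize_risk(0.15, high_risk_domain=False))  # Medium
print(categorize_risk(0.15, high_risk_domain=True))   # High
```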
### Evidence Collection
- Comprehensive audit trail generation
- Compliance documentation export
- Regulatory mapping and reporting
- Stakeholder communication materials
### Model Management

- Model registration and versioning
- Metadata management
- Performance tracking
- Bias history and trends
- Model comparison and benchmarking
- Lifecycle state management
### Real-Time Monitoring

- Live bias metrics monitoring
- Performance tracking
- Alert system for threshold violations
- Dashboard analytics
- Historical trend analysis
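The threshold-violation alerting can be sketched as a pure function over the latest metrics. The shape of the alert record is an assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    metric: str
    value: float
    threshold: float

def check_thresholds(metrics: dict, thresholds: dict) -> list:
    """Return an Alert for every bias metric whose magnitude exceeds
    its configured threshold; metrics without a threshold are ignored."""
    return [
        Alert(name, value, thresholds[name])
        for name, value in metrics.items()
        if name in thresholds and abs(value) > thresholds[name]
    ]

alerts = check_thresholds(
    {"demographic_parity": 0.25, "equalized_odds": 0.05},
    {"demographic_parity": 0.10, "equalized_odds": 0.10},
)
print([a.metric for a in alerts])  # only demographic_parity fires
```

A monitoring loop would run this against each fresh metrics snapshot and forward the resulting alerts to the dashboard or notification channel.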
## Architecture

### Backend Services (40+ API Route Modules)

- Core Governance: Authentication, authorization, policy management
- Bias Detection Engine:
  - Classic ML (Demographic Parity, Equalized Odds)
  - Modern LLM (WEAT, SEAT, Minimal Pairs)
  - Multimodal (image, audio, video)
- Compliance Engine:
  - India Stack: DPDP Act 2023, NITI Aayog Framework, Digital India Act
  - Global: EU AI Act, GDPR, NIST AI RMF
- RAG System: Semantic search for regulatory documents
- FairMind Monitor:
  - Real-time token analysis
  - Live bias metric tracking
  - Threshold-based alerting
- Automated Remediation: Code generation for bias mitigation
- MLOps Integration: Seamless connection with W&B and MLflow
### Frontend Application (40+ Pages, 80+ Components)
- Dashboards: Main, Compliance, Real-time Monitoring
- Interactive Tools: Bias Testing, Remediation Generator, Policy Editor
- Visualizations: Real-time charts, Bias metric heatmaps, Compliance scorecards
- Evidence Management: Automated collection and reporting UI
### Data Layer
- Supabase PostgreSQL: Primary relational storage for models, results, and users
- Redis: High-performance caching for real-time metrics
- Vector Store: Embeddings for regulatory RAG system
- File Storage: Artifacts, reports, and evidence documents
## Getting Started

### Prerequisites

- Python 3.9+ (Backend)
- Node.js 18+ (Frontend)
- UV (Python package manager) - Installation Guide
- Bun (JavaScript runtime) - Installation Guide
### Quick Start

```bash
# Clone the repository
git clone https://github.com/adhit-r/fairmind.git
cd fairmind

# Backend setup
cd apps/backend
uv sync
cp config/env.example .env  # Configure your environment
uv run python -m uvicorn api.main:app --reload --port 8000

# Frontend setup (new terminal)
cd ../frontend-new
bun install
bun run dev
```

Access points:

- Frontend: http://localhost:1111
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
### Environment Configuration

Backend (`apps/backend/.env`):

```bash
# Database
DATABASE_URL=postgresql://user:password@localhost:5432/fairmind

# Cache (optional)
REDIS_URL=redis://localhost:6379

# MLOps integration (optional)
WANDB_API_KEY=your_wandb_key
MLFLOW_TRACKING_URI=http://localhost:5000

# Security
SECRET_KEY=your-secret-key
JWT_SECRET=your-jwt-secret

# Environment
ENVIRONMENT=development
```

Frontend (`apps/frontend-new/.env.local`):

```bash
NEXT_PUBLIC_API_URL=http://localhost:8000
```

For comprehensive setup instructions, see:
- Setup Guide - Complete installation and configuration
- Quick Start Guide - 5-minute setup
- Model Registration Guide - Register and manage models
- India Compliance Guide - DPDP Act and India AI Framework compliance
## API Documentation

Full interactive API documentation with request/response examples:
- Swagger UI: api.fairmind.xyz/docs
- ReDoc: api.fairmind.xyz/redoc
### Bias Detection

- `POST /api/v1/bias/detect` - Classic ML bias detection
- `POST /api/v1/bias-v2/detect` - Production-ready bias detection
- `POST /api/v1/modern-bias/detect` - LLM bias detection (WEAT, SEAT)
- `POST /api/v1/multimodal-bias/image-detection` - Image generation bias
- `POST /api/v1/multimodal-bias/audio-detection` - Audio generation bias
- `POST /api/v1/multimodal-bias/video-detection` - Video content bias

### Remediation

- `POST /api/v1/bias/remediate` - Generate remediation code
- `GET /api/v1/bias/remediation-strategies` - List available strategies

### MLOps Integration

- `GET /api/v1/mlops/status` - Check integration status
- `POST /api/v1/mlops/log-test` - Manually log experiments
- `GET /api/v1/mlops/experiments` - List logged experiments

### Compliance and Governance

- `POST /api/v1/compliance/report` - Generate compliance report
- `POST /api/v1/aibom/generate` - Create AI Bill of Materials
- `GET /api/v1/compliance/frameworks` - List supported frameworks

### Model Management

- `GET /api/v1/core/models` - List registered models
- `POST /api/v1/core/models` - Register new model
- `GET /api/v1/core/models/{id}` - Get model details
- `PUT /api/v1/core/models/{id}` - Update model
- `DELETE /api/v1/core/models/{id}` - Delete model

### Monitoring and Analytics

- `GET /api/v1/database/dashboard-stats` - Dashboard statistics
- `GET /api/v1/monitoring/metrics` - Real-time metrics
- `GET /api/v1/analytics/trends` - Historical trends

### System

- `GET /health` - Health check endpoint
- `GET /api/v1/system/info` - System information
Total API Endpoints: 50+
For complete API reference, see API Documentation
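As a usage sketch, a classic-ML detection request might be assembled and sent like this. The request-body field names below are assumptions for illustration; consult the Swagger UI at `/docs` for the authoritative schema:

```python
import json
import urllib.request

API = "http://localhost:8000"  # or https://api.fairmind.xyz

def build_bias_request(dataset_path: str, target: str, protected: list) -> dict:
    """Assemble a request body for POST /api/v1/bias/detect.
    NOTE: these field names are illustrative assumptions, not the
    documented schema -- check /docs before relying on them."""
    return {
        "dataset": dataset_path,
        "target_column": target,
        "protected_attributes": protected,
    }

payload = build_bias_request("data/loans.csv", "approved", ["gender", "age_group"])
req = urllib.request.Request(
    f"{API}/api/v1/bias/detect",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# result = json.load(urllib.request.urlopen(req))  # uncomment with a running backend
```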
## Frontend Features

| Page | Route | Description |
|---|---|---|
| Dashboard | `/dashboard` | System overview, health metrics, recent activity |
| Bias Detection | `/bias` | Upload datasets, configure tests, view classic ML bias metrics |
| Modern Bias | `/modern-bias` | LLM bias detection interface (WEAT, SEAT, Minimal Pairs) |
| Multimodal Bias | `/multimodal-bias` | Image, audio, video bias analysis |
| Test Results | `/tests/[id]` | Detailed test analysis, W&B/MLflow links, JSON export |
| Remediation | `/remediation` | Select strategies, generate Python code |
| Compliance Dashboard | `/compliance-dashboard` | Policy management, report generation |
| AI BOM | `/ai-bom` | Bill of Materials generation and tracking |
| Models | `/models` | Model registry, versioning, lifecycle management |
| Monitoring | `/monitoring` | Real-time metrics, alerts, performance tracking |
| Analytics | `/analytics` | Performance analytics, trend analysis, insights |
| Settings | `/settings` | MLOps configuration, profile management, preferences |
- Neobrutal Design System: Modern, bold UI design
- Responsive Layouts: Works on desktop, tablet, and mobile
- Real-Time Updates: Live metrics and status updates
- Interactive Visualizations: Charts and graphs for bias metrics
- Export Capabilities: JSON, CSV, PDF export options
- Deep Linking: Direct links to MLOps dashboards
- Dark Mode Support: Theme customization
- Accessibility: WCAG compliance (in progress)
## Technology Stack

### Backend

#### Core Framework

- Python 3.9+
- FastAPI 0.121.1
- Uvicorn (ASGI server)
- Pydantic (data validation)

#### Machine Learning

- scikit-learn 1.7.2
- pandas 2.3.3
- numpy 2.3.4
- scipy 1.16.3
- transformers (Hugging Face)

#### Database & Storage

- SQLAlchemy 2.0.44 (ORM)
- Supabase (PostgreSQL, production)
- SQLite (local development)
- Redis 7.0.1 (caching)

#### Authentication & Security

- JWT (JSON Web Tokens)
- bcrypt (password hashing)
- Security headers middleware
- Rate limiting

#### Integrations

- Supabase SDK
- Weights & Biases API
- MLflow tracking
- AWS S3 (boto3)

#### Testing

- pytest with coverage
- Playwright (E2E)
- Test coverage: 80%+ target
### Frontend

#### Core Framework

- Next.js 14.2.32
- React 18.3.1
- TypeScript 5.5.3

#### UI Libraries

- Radix UI (15+ components)
- shadcn/ui
- Neobrutalism design system
- Tailwind CSS 3.4.4

#### State & Data

- React Hooks
- React Hook Form 7.51.0
- Zod 3.23.8 (validation)

#### Visualization

- Recharts 2.12.0
- Tabler Icons
- Lucide React

#### Testing

- Playwright 1.44.0
- E2E test suite (11 test files)

#### Build Tools

- Bun (package manager)
- PostCSS
- Autoprefixer

### Infrastructure

#### Deployment

- Railway (backend hosting)
- Netlify (frontend hosting)
- Docker support
- Kubernetes configs

#### CI/CD

- GitHub Actions
- Automated testing
- Branch protection enabled
- Security scanning (CodeQL, Dependabot)

#### Monitoring

- Health check endpoints
- Structured logging
- Error tracking (Sentry)
## Project Structure

```
fairmind/
├── apps/
│   ├── backend/              # FastAPI backend
│   │   ├── api/              # API routes (27 modules)
│   │   │   ├── routes/       # Route handlers
│   │   │   └── main.py       # FastAPI application
│   │   ├── services/         # Business logic (17 modules)
│   │   ├── config/           # Configuration
│   │   ├── middleware/       # Security & request handling
│   │   ├── database/         # Database models and migrations
│   │   ├── tests/            # Test suite (21 files)
│   │   └── pyproject.toml    # Python dependencies
│   │
│   ├── frontend-new/         # Next.js frontend
│   │   ├── src/
│   │   │   ├── app/          # Next.js app router (30+ pages)
│   │   │   ├── components/   # React components (60+)
│   │   │   └── lib/          # Utilities & API clients
│   │   ├── tests/            # E2E tests (Playwright)
│   │   └── package.json      # Node dependencies
│   │
│   ├── website/              # Marketing site (Astro)
│   └── ml/                   # ML utilities and experiments
│
├── docs/                     # Documentation
│   ├── development/          # Development guides
│   ├── deployment/           # Deployment guides
│   ├── architecture/         # Architecture documentation
│   └── API_ENDPOINTS.md      # API reference
│
├── scripts/                  # Utility scripts
├── k8s/                      # Kubernetes configurations
└── archive/                  # Archived files and documentation
```
## Development

### Backend Development

```bash
cd apps/backend
uv sync
uv run python -m uvicorn api.main:app --reload --port 8000
```

### Frontend Development

```bash
cd apps/frontend-new
bun install
bun run dev
```

### Backend Tests

```bash
cd apps/backend
uv run pytest
uv run pytest --cov=api --cov-report=html
```

### Frontend E2E Tests

```bash
cd apps/frontend-new
bun run test
bun run test:ui
```

### Backend E2E Tests

```bash
cd apps/backend
uv run pytest tests/e2e/ -m e2e
```

### Code Quality

- Linting: Black, isort, flake8 (Python), ESLint (TypeScript)
- Type Checking: mypy (Python), TypeScript compiler
- Formatting: Black (Python), Prettier (TypeScript)
- Pre-commit Hooks: Automated code quality checks
See Contributing Guide for:
- Code style guidelines
- Commit message conventions
- Pull request process
- Testing requirements
## Deployment

### Backend (Railway)

- Automatic deployments from the main branch
- Environment variables configured in the Railway dashboard
- Health checks enabled
- Logging and monitoring configured

### Frontend (Netlify)

- Automatic deployments from the main branch
- Build command: `bun run build`
- Environment variables in the Netlify dashboard
- CDN distribution

### Docker

```bash
# Build backend image
cd apps/backend
docker build -t fairmind-backend .

# Run backend
docker run -p 8000:8000 fairmind-backend

# Build frontend image
cd apps/frontend-new
docker build -t fairmind-frontend .

# Run frontend
docker run -p 3000:3000 fairmind-frontend
```

### Kubernetes

Kubernetes configurations are available in the `k8s/` directory:

- Backend deployment
- Frontend deployment
- ConfigMaps and Secrets
- Ingress configuration

See Deployment Guide for detailed instructions.
## Contributing

FairMind is an open-source project and welcomes contributions from the community.

### How to Contribute

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes following our coding standards
4. Write or update tests as needed
5. Commit your changes using the conventional commit format
6. Push to your branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request targeting the `main` branch

### Guidelines

- Follow the code style guidelines in CONTRIBUTING.md
- Write tests for new features
- Update documentation as needed
- Use conventional commit messages
- Ensure all tests pass before submitting

### Good First Issues

We have 21+ good first issues that are perfect for new contributors.

### Review Process

- All PRs require at least one review before merging
- The main branch is protected
- Automated tests must pass
- Code quality checks are enforced
## Security

FairMind takes security seriously and follows responsible disclosure practices.

### Reporting Vulnerabilities

- Email: security@fairmind.xyz
- Response time: within 24 hours
- Please do not report security vulnerabilities through public GitHub issues

### Security Scanning

- CodeQL for vulnerability detection
- Dependabot for dependency scanning
- Regular security audits
- Automated security checks in CI/CD

### Security Features

- JWT-based authentication
- Password hashing with bcrypt
- Security headers middleware
- Rate limiting
- Input validation and sanitization
- SQL injection prevention
- XSS protection

See the Security Policy for details.
## Roadmap

### Completed

- Core AI governance features
- Modern LLM bias detection (WEAT, SEAT, Minimal Pairs)
- Multimodal bias analysis (image, audio, video)
- MLOps integration (W&B, MLflow)
- Compliance reporting (EU AI Act, GDPR)
- AI BOM generation
- Production deployment
- Comprehensive testing (80%+ coverage)
- Documentation suite

### In Progress

- CI/CD pipeline automation
- Frontend performance optimizations
- Security vulnerability remediation
- Accessibility improvements

### Planned

- Mobile responsiveness
- Internationalization (i18n)
- Advanced analytics dashboard
- Enterprise features

See ROADMAP.md for the detailed roadmap.
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Contact

- Repository: github.com/adhit-r/fairmind
- Support email: adhi.r@fairmind.xyz

FairMind - Making AI fair, transparent, and accountable for everyone. Built for the AI ethics community.





