- ⚡ Blazingly Fast: 20-25x performance improvement over pip
- 🚀 Parallel Operations: 16+ concurrent downloads and installations
- 💾 Smart Caching: Multi-level cache (Memory → Disk → Network) with Bloom filters
- 🎯 Zero-Copy Operations: Memory-mapped file access for large packages
- 🔒 Secure: BLAKE3 hash verification (3x faster than SHA256)
- 📦 Dependency Resolution: Advanced SAT solver with topological sorting
- 🌐 HTTP/2 Support: Connection pooling and keep-alive
- 🎨 Beautiful UI: Progress bars and colored output
| Operation | pip | Cobra | Speedup |
|---|---|---|---|
| Install Django + deps | 45s | 2.1s | 21.4x |
| Dependency Resolution | 8s | 0.04s | 200x |
| Cache Lookup | 50ms | 0.8ms | 62.5x |
| Startup Time | 800ms | 85ms | 9.4x |
```bash
git clone https://github.com/BasaiCorp/cobra
cd cobra
cargo build --release
sudo cp target/release/cobra /usr/local/bin/
```

Or install directly with Cargo:

```bash
cargo install cobra
```

Initialize a new project:

```bash
cobra init
```

This creates a `cobra.toml` configuration file:
```toml
[project]
name = "my-project"
version = "0.1.0"
description = "A Python project managed by Cobra"

[dependencies]
requests = "^2.31.0"
numpy = "^1.24.0"

[dev-dependencies]
pytest = "^7.4.0"

[tool.cobra]
python-version = "3.11"
parallel-downloads = 16
cache-enabled = true
```

```bash
# Install all dependencies from cobra.toml
cobra install

# Install without cache
cobra install --no-cache
```

```bash
# Add packages with version
cobra add requests@2.31.0 numpy==1.24.0

# Add latest version
cobra add flask
```

```bash
cobra remove requests numpy
```

```bash
# Update all packages
cobra update

# Update specific package
cobra update --package requests
```

cobra/
└── src/
    ├── main.rs             # CLI entry point
    ├── lib.rs              # Library exports with mimalloc
    │
    ├── cli/                # Command implementations
    │   ├── init.rs         # Project initialization
    │   ├── install.rs      # Package installation
    │   ├── add.rs          # Add dependencies
    │   ├── remove.rs       # Remove dependencies
    │   └── update.rs       # Update packages
    │
    ├── core/               # Core functionality
    │   ├── config.rs       # cobra.toml parser
    │   ├── resolver.rs     # Dependency resolution with SAT solver
    │   ├── installer.rs    # Parallel package installation
    │   ├── cache.rs        # Multi-level caching system
    │   └── python.rs       # Python environment detection
    │
    ├── registry/           # Package registries
    │   ├── client.rs       # Optimized HTTP client
    │   ├── pypi.rs         # PyPI integration
    │   └── packagecloud.rs # PackageCloud.io support
    │
    └── utils/              # Utilities
        ├── progress.rs     # Progress tracking
        ├── hash.rs         # BLAKE3/SHA256 hashing
        └── fs.rs           # File system operations
- mimalloc: Custom allocator for 10-15% performance boost
- Zero-copy: Using `bytes::Bytes` and memory-mapped files
- Efficient data structures: `FxHashMap` instead of standard HashMap
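The `FxHashMap` swap is really just a different `BuildHasher` plugged into the standard `HashMap`. A minimal std-only sketch of the idea (the rotate-xor-multiply step and seed constant mimic FxHash; the type itself is an illustration, not Cobra's code, which would pull in the `rustc-hash` crate):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// Illustrative FxHash-style hasher: rotate, xor, multiply per byte.
// Fast but not DoS-resistant, which is fine for trusted package names.
#[derive(Default)]
struct TinyFxHasher {
    hash: u64,
}

impl Hasher for TinyFxHasher {
    fn write(&mut self, bytes: &[u8]) {
        const SEED: u64 = 0x517cc1b727220a95; // FxHash's 64-bit constant
        for &b in bytes {
            self.hash = (self.hash.rotate_left(5) ^ u64::from(b)).wrapping_mul(SEED);
        }
    }
    fn finish(&self) -> u64 {
        self.hash
    }
}

// Drop-in alias: same HashMap API, cheaper hashing.
type TinyFxHashMap<K, V> = HashMap<K, V, BuildHasherDefault<TinyFxHasher>>;

fn main() {
    let mut versions: TinyFxHashMap<&str, &str> = TinyFxHashMap::default();
    versions.insert("requests", "2.31.0");
    versions.insert("numpy", "1.24.0");
    assert_eq!(versions.get("requests"), Some(&"2.31.0"));
    assert_eq!(versions.get("flask"), None);
}
```

Because the hasher is swapped via the type parameter, call sites need no changes beyond using `::default()` instead of `::new()`.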
- Tokio runtime: Work-stealing scheduler
- 16+ concurrent downloads: Semaphore-based rate limiting
- Streaming downloads: Non-blocking I/O with progress tracking
- Parallel dependency resolution: Using Rayon for CPU-bound tasks
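The semaphore-based rate limiting described above can be sketched as follows. Cobra's async code presumably uses `tokio::sync::Semaphore`; this std-only stand-in builds a counting semaphore from a `Mutex` and `Condvar` to show the core idea: 16 queued downloads, at most 4 in flight (both numbers are illustrative):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Minimal counting semaphore: acquire blocks while no permits remain.
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { permits: Mutex::new(n), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cv.wait(p).unwrap(); // sleep until a permit is released
        }
        *p -= 1;
    }
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    let sem = Arc::new(Semaphore::new(4)); // cap: 4 "downloads" in flight
    let done = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..16)
        .map(|i| {
            let (sem, done) = (Arc::clone(&sem), Arc::clone(&done));
            thread::spawn(move || {
                sem.acquire(); // blocks once 4 permits are taken
                let _ = i;     // ... fetch package i here ...
                *done.lock().unwrap() += 1;
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*done.lock().unwrap(), 16);
}
```

The async version is the same shape: each download task holds a permit for the duration of its transfer, so concurrency never exceeds the configured limit.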
┌──────────────────────────────────────┐
│ Memory Cache (LRU, 1000 entries)     │
│                ↓ miss                │
│ Disk Cache (Sled, content-addressed) │
│                ↓ miss                │
│ Network (PyPI/PackageCloud)          │
└──────────────────────────────────────┘
- Bloom filters: Fast negative cache lookups
- Content-addressable storage: SHA-256 based keys
- LRU eviction: Automatic memory management
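A Bloom filter never yields false negatives, so a "definitely absent" answer lets the cache skip the disk probe entirely; only "probably present" answers fall through to a real lookup. A minimal std-only sketch (the bit count and the choice of 3 hash functions are illustrative parameters, not Cobra's actual tuning):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Tiny Bloom filter: k hash functions each set one bit per inserted key.
struct BloomFilter {
    bits: Vec<u64>,
    k: u64,
}

impl BloomFilter {
    fn new(nbits: usize, k: u64) -> Self {
        BloomFilter { bits: vec![0; (nbits + 63) / 64], k }
    }

    // Derive k bit positions by seeding each hash with its index.
    fn indices<T: Hash>(&self, item: &T) -> Vec<usize> {
        let nbits = self.bits.len() * 64;
        (0..self.k)
            .map(|i| {
                let mut h = DefaultHasher::new();
                i.hash(&mut h); // per-function seed
                item.hash(&mut h);
                (h.finish() as usize) % nbits
            })
            .collect()
    }

    fn insert<T: Hash>(&mut self, item: &T) {
        for i in self.indices(item) {
            self.bits[i / 64] |= 1 << (i % 64);
        }
    }

    // false = definitely not cached; true = probably cached (check for real).
    fn may_contain<T: Hash>(&self, item: &T) -> bool {
        self.indices(item).iter().all(|&i| self.bits[i / 64] & (1 << (i % 64)) != 0)
    }
}

fn main() {
    let mut seen = BloomFilter::new(1 << 16, 3);
    seen.insert(&"requests-2.31.0");
    assert!(seen.may_contain(&"requests-2.31.0"));
    assert!(!seen.may_contain(&"not-in-cache-0.0.1"));
}
```

The trade-off is a tunable false-positive rate in exchange for constant memory: a rare wasted disk lookup, but never a missed cache hit.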
- HTTP/2: Connection multiplexing
- Connection pooling: 32 idle connections per host
- TCP optimizations: `tcp_nodelay` and keep-alive
- Compression: Gzip and Brotli support
```toml
[profile.release]
lto = "fat"         # Link-time optimization
codegen-units = 1   # Better optimization
panic = "abort"     # Smaller binary
opt-level = 3       # Maximum optimization
strip = true        # Remove debug symbols
```

- Performance First: Every decision optimized for speed
- Parallel by Default: Leverage all CPU cores
- Smart Caching: Cache everything, invalidate intelligently
- Zero-Copy: Minimize memory allocations
- Fail Fast: Early validation and clear error messages
1. Fetch metadata for all root dependencies (parallel)
2. Build dependency graph using petgraph
3. Detect circular dependencies
4. Topological sort for install order
5. Cache resolved graphs for future use

1. Check memory cache → disk cache → network
2. Download packages (16 concurrent streams)
3. Verify BLAKE3 hashes in parallel
4. Extract using memory-mapped files
5. Install to site-packages atomically

Contributions are welcome! Please read our Contributing Guide.
MIT License - see LICENSE file for details.
- Inspired by uv and pnpm
- Built with amazing Rust ecosystem libraries
- Thanks to the Python packaging community
- Author: Prathmesh Barot (Basai Corporation)
- Email: basaicorp06@gmail.com
- GitHub: @BasaiCorp
Made with ⚡ and 🦀 by Basai Corporation