🔎 Super-scale your images and run experiments with Residual Dense and Adversarial Networks.
Speech recognition based on PaddlePaddle, focused on Chinese speech recognition. A mature project with good recognition accuracy. Supports training and inference on Windows and Linux, and inference on Nvidia Jetson development boards.
Docker, NVIDIA Docker2, YOLOv5, YOLOX, YOLO, DeepSORT, TensorRT, ROS, DeepStream, Jetson Nano, TX2, and NX for high-performance deployment.
GPU-ready Dockerfile to run the Stability.AI stable-diffusion model v2 with a simple web interface. Includes multi-GPU support.
⚡ Useful scripts when using TensorRT
Simple wrapper for docker-compose to use GPU-enabled Docker under nvidia-docker (a minimal GPU-launch sketch in Python follows at the end of this list).
TensorFlow in Docker on Mesos #tfmesos #tensorflow #mesos
A dockerized version of neural style transfer algorithms
Docker environment for fast.ai Deep Learning Course 1 at http://course.fast.ai
A tool for running deep learning algorithms for semantic segmentation with satellite imagery
Speech synthesis (TTS) in low-resource languages by training from scratch with FastPitch and fine-tuning with HiFi-GAN
Workflow that shows how to train neural networks on EC2 instances with GPU support and compares training times to CPUs
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a PyTorch -> ONNX -> TensorRT converter (a minimal export sketch follows at the end of this list) and inference pipelines (TensorRT, and multi-format Triton server). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
Cloud based, GPU accelerated Simulated Annealing
Real-time GPU insights via a sleek web interface for nvidia-smi.
NGC Container Replicator
The Swiss Army knife for extracting optical flow
Code Server Docker image for PyTorch with Python development in the browser. Includes CUDA!
A ChIP-Seq peak-calling algorithm using convolutional neural networks
A project that demonstrates the power and simplicity of NVIDIA NIM (NVIDIA Inference Microservices), a suite of optimized cloud-native microservices, by setting up and running a Retrieval-Augmented Generation (RAG) pipeline.
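
For context on the nvidia-docker topic itself, here is a minimal sketch of launching a GPU-enabled container from Python. It is not taken from any repository above; it assumes the `docker` Python SDK (docker-py) is installed, the host has the NVIDIA Container Toolkit configured, and the CUDA image tag shown is available.

```python
# Minimal sketch: run `nvidia-smi` in a GPU-enabled container via the docker Python SDK.
# Assumptions: docker-py installed, NVIDIA Container Toolkit set up on the host,
# and the image tag below is an illustrative choice, not prescribed by any repo above.
import docker

client = docker.from_env()

# Request all GPUs (count=-1) with the "gpu" capability, mirroring `docker run --gpus all`.
logs = client.containers.run(
    "nvidia/cuda:12.2.0-base-ubuntu22.04",
    "nvidia-smi",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(logs.decode())
```

If the container prints the familiar nvidia-smi table, GPU passthrough is working and the same `device_requests` pattern can be used for training or inference images.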
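The Triton/TensorRT pipeline entry above centers on a PyTorch -> ONNX -> TensorRT conversion. Below is a minimal export sketch of the first step only; it uses a stock ResNet-18 as a stand-in for the CRAFT detector, whose actual export code lives in that repository.

```python
# Minimal sketch of PyTorch -> ONNX export.
# Assumptions: torch and torchvision installed; ResNet-18 is a placeholder model,
# and the input shape/file name are illustrative, not taken from the repo above.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # example NCHW input

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},  # allow variable batch size
)
```

From there, a TensorRT engine is typically built with the stock TensorRT CLI, e.g. `trtexec --onnx=model.onnx --saveEngine=model.engine`, and the resulting engine, TorchScript module, or ONNX file is placed in a Triton model repository.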