Dao AI Lab (@Dao-AILab)

We are an AI research group led by Prof. Tri Dao.

Popular repositories

  1. flash-attention Public

    Fast and memory-efficient exact attention

    Python · 20.8k stars · 2.2k forks

  2. quack Public

    A Quirky Assortment of CuTe Kernels

    Python · 678 stars · 61 forks

  3. causal-conv1d Public

    Causal depthwise conv1d in CUDA, with a PyTorch interface

    Cuda · 657 stars · 141 forks

  4. fast-hadamard-transform Public

    Fast Hadamard transform in CUDA, with a PyTorch interface

    C · 261 stars · 49 forks

  5. grouped-latent-attention Public

    Python · 132 stars · 3 forks

  6. gemm-cublas Public

    Python · 23 stars · 1 fork

Repositories

Showing 7 of 7 repositories
  • flash-attention Public

    Fast and memory-efficient exact attention (see the usage sketch after this list)

    Python · 20,821 stars · BSD-3-Clause license · 2,175 forks · 891 issues · 83 pull requests · Updated Nov 26, 2025
  • quack Public

    A Quirky Assortment of CuTe Kernels

    Python · 678 stars · Apache-2.0 license · 61 forks · 12 issues · 1 pull request · Updated Nov 22, 2025
  • causal-conv1d Public

    Causal depthwise conv1d in CUDA, with a PyTorch interface (see the usage sketch after this list)

    Cuda · 657 stars · BSD-3-Clause license · 141 forks · 33 issues · 10 pull requests · Updated Oct 20, 2025
  • fast-hadamard-transform Public

    Fast Hadamard transform in CUDA, with a PyTorch interface (see the usage sketch after this list)

    C · 261 stars · BSD-3-Clause license · 49 forks · 8 issues · 2 pull requests · Updated Oct 20, 2025
  • cutlass Public Forked from NVIDIA/cutlass

    CUDA Templates for Linear Algebra Subroutines

    C++ · 1 star · 1,566 forks · 0 issues · 0 pull requests · Updated Jun 8, 2025
  • grouped-latent-attention Public

    Python · 132 stars · MIT license · 3 forks · 5 issues · 0 pull requests · Updated May 29, 2025
  • gemm-cublas Public
    Python · 23 stars · Apache-2.0 license · 1 fork · 0 issues · 0 pull requests · Updated May 4, 2025
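
Usage sketches

For flash-attention (listed above): a minimal sketch of calling the package's flash_attn_func interface on half-precision CUDA tensors. The (batch, seqlen, nheads, headdim) shape convention and the causal flag follow the repository's documented API; the tensor sizes here are only illustrative.

    # Minimal sketch, assuming flash-attn is installed (pip install flash-attn)
    # and a CUDA GPU is available. Shapes follow (batch, seqlen, nheads, headdim).
    import torch
    from flash_attn import flash_attn_func

    batch, seqlen, nheads, headdim = 2, 1024, 16, 64
    q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
    k = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
    v = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)

    # Exact attention with a causal mask; the default softmax scale is 1/sqrt(headdim).
    out = flash_attn_func(q, k, v, causal=True)
    print(out.shape)  # torch.Size([2, 1024, 16, 64])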
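
For causal-conv1d (listed above): a minimal sketch of the causal_conv1d_fn function, which applies a depthwise causal convolution along the sequence dimension. The (batch, dim, seqlen) layout and the optional "silu" activation follow the repository's README; the sizes are illustrative.

    # Minimal sketch, assuming causal-conv1d is installed (pip install causal-conv1d)
    # and a CUDA GPU is available. x has layout (batch, dim, seqlen).
    import torch
    from causal_conv1d import causal_conv1d_fn

    batch, dim, seqlen, width = 2, 512, 1024, 4
    x = torch.randn(batch, dim, seqlen, device="cuda", dtype=torch.float16)
    weight = torch.randn(dim, width, device="cuda", dtype=torch.float16)  # one short filter per channel
    bias = torch.randn(dim, device="cuda", dtype=torch.float16)

    # Depthwise causal conv: the output at time t depends only on inputs at times <= t.
    out = causal_conv1d_fn(x, weight, bias, activation="silu")
    print(out.shape)  # torch.Size([2, 512, 1024])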
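
For fast-hadamard-transform (listed above): a minimal sketch of the hadamard_transform function, which multiplies each row of the input by a Hadamard matrix. The scale argument and the power-of-two dimension requirement follow the repository's README as I read it; treat the exact call as an assumption to check against the current version.

    # Minimal sketch, assuming fast-hadamard-transform is installed
    # (pip install fast-hadamard-transform) and a CUDA GPU is available.
    import math
    import torch
    from fast_hadamard_transform import hadamard_transform

    batch, dim = 4, 8192  # dim is a power of two
    x = torch.randn(batch, dim, device="cuda", dtype=torch.float16)

    # Multiply each row by the dim x dim Hadamard matrix; scaling by 1/sqrt(dim)
    # makes the transform orthonormal.
    out = hadamard_transform(x, scale=1.0 / math.sqrt(dim))
    print(out.shape)  # torch.Size([4, 8192])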
