Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios.
Vulkan & GLSL implementation of FlashAttention-2
Poplar implementation of FlashAttention for IPU
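For context on what these repositories implement, below is a minimal, illustrative C++ sketch of the core idea shared by FlashAttention and FlashAttention-2: the attention output softmax(QK^T / sqrt(d)) V is computed over blocks of keys/values with an online softmax, so the full N x N score matrix is never materialized. This is not taken from any of the listed repositories and makes no performance claims; real implementations are fused GPU/IPU kernels.

```cpp
// Illustrative single-threaded sketch of tiled attention with online softmax.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// One query row attended over K/V, processed in blocks of `block` keys.
// q: [d], K: [n x d], V: [n x d] (row-major), out: [d].
void flash_attention_row(const float* q, const float* K, const float* V,
                         int n, int d, int block, float* out) {
    const float scale = 1.0f / std::sqrt(static_cast<float>(d));
    float m = -INFINITY;               // running max of scores
    float l = 0.0f;                    // running softmax denominator
    std::vector<float> acc(d, 0.0f);   // running weighted sum of V rows

    for (int start = 0; start < n; start += block) {
        int end = std::min(start + block, n);
        for (int j = start; j < end; ++j) {
            float s = 0.0f;
            for (int k = 0; k < d; ++k) s += q[k] * K[j * d + k];
            s *= scale;

            // Online softmax update: rescale previous terms if the max grows.
            float m_new = std::max(m, s);
            float alpha = std::exp(m - m_new);   // correction for old terms
            float p = std::exp(s - m_new);       // weight of the current key
            l = l * alpha + p;
            for (int k = 0; k < d; ++k)
                acc[k] = acc[k] * alpha + p * V[j * d + k];
            m = m_new;
        }
    }
    for (int k = 0; k < d; ++k) out[k] = acc[k] / l;
}

int main() {
    const int n = 8, d = 4;
    std::vector<float> q(d, 1.0f), K(n * d), V(n * d), out(d);
    for (int i = 0; i < n * d; ++i) { K[i] = 0.01f * i; V[i] = 0.1f * (i % d); }
    flash_attention_row(q.data(), K.data(), V.data(), n, d, /*block=*/4, out.data());
    for (int k = 0; k < d; ++k) std::printf("%f ", out[k]);
    std::printf("\n");
    return 0;
}
```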