TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels

Latest commit: 3d87770e15 by Perkz Zheng
[https://nvbugspro.nvidia.com/bug/5295470] support headDim 256 for blackwell fmha kernels (#5164)
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
2025-06-13 23:01:01 +08:00
Name           | Last commit                                                                                                | Date
batchedGemm    | [TRTLLM-5589] feat: Integrate TRT-LLM Gen FP8 Batched GEMM with Pytorch workflow kernel autotuner (#4872) | 2025-06-09 11:02:48 +01:00
blockScaleMoe  | [fix] Fix comment to pass guardwords check (#5191)                                                         | 2025-06-13 15:49:59 +08:00
fmha           | [https://nvbugspro.nvidia.com/bug/5295470] support headDim 256 for blackwell fmha kernels (#5164)          | 2025-06-13 23:01:01 +08:00
gemm           | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643)                                                       | 2025-06-03 14:07:54 -07:00
gemmGatedAct   | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643)                                                       | 2025-06-03 14:07:54 -07:00
CMakeLists.txt | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643)                                                       | 2025-06-03 14:07:54 -07:00