TensorRT-LLM/cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention
Latest commit: 6cdfc54883 — feat: Add FP8 support for SM 120 (#3248)
Pamela Peng, 2025-04-14 16:05:41 -07:00

* Allow FP8 on SM120
* fix sm121
* fix
* fix pre-commit
* review update

Signed-off-by: Pamela Peng <179191831+pamelap-nvidia@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
File                                 Last commit                                                     Date
cubin                                feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190)  2025-04-07 15:14:13 +08:00
CMakeLists.txt                       Update TensorRT-LLM (#2783)                                     2025-02-13 18:40:22 +08:00
fmhaPackedMask.cu                    Update TensorRT-LLM (#2873)                                     2025-03-11 21:13:42 +08:00
fmhaPackedMask.h                     Update TensorRT-LLM (#2783)                                     2025-02-13 18:40:22 +08:00
fmhaRunner.cpp                       feat: Add FP8 support for SM 120 (#3248)                        2025-04-14 16:05:41 -07:00
fmhaRunner.h                         Update TensorRT-LLM (#2363)                                     2024-10-22 20:27:35 +08:00
fused_multihead_attention_common.h   feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190)  2025-04-07 15:14:13 +08:00
fused_multihead_attention_v2.cpp     feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190)  2025-04-07 15:14:13 +08:00
fused_multihead_attention_v2.h       feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190)  2025-04-07 15:14:13 +08:00
tmaDescriptor.h                      Update TensorRT-LLM (#1274)                                     2024-03-12 18:15:52 +08:00