TensorRT-LLM/cpp
Gabriel Wu 2e0cd7922e
fix: add SM90 guard for FP8 Blockscale GEMM (#3575)

Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
Co-authored-by: Tao Li @ NVIDIA <tali@nvidia.com>
2025-04-16 14:44:37 +08:00
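
The diff behind this commit is not shown here, but the idea of an SM90 guard is straightforward: query the device's compute capability before dispatching the FP8 Blockscale GEMM and fail with a clear error on non-Hopper GPUs rather than launching an unsupported kernel. Below is a minimal standalone C++/CUDA sketch of that pattern, not the actual patch; `getSmVersion` and `runFp8BlockscaleGemm` are illustrative names, and only the CUDA runtime API calls are real.

```cpp
// Hypothetical sketch of an SM90 guard for an FP8 Blockscale GEMM path.
#include <cuda_runtime_api.h>
#include <stdexcept>
#include <string>

// Return the compute capability of the given device as major*10 + minor,
// e.g. 90 for H100 (SM90).
int getSmVersion(int deviceId = 0)
{
    cudaDeviceProp prop{};
    cudaError_t err = cudaGetDeviceProperties(&prop, deviceId);
    if (err != cudaSuccess)
    {
        throw std::runtime_error(std::string("cudaGetDeviceProperties failed: ") + cudaGetErrorString(err));
    }
    return prop.major * 10 + prop.minor;
}

// Hypothetical dispatch wrapper: refuse to run the SM90-only kernel on any
// other architecture instead of failing inside the kernel launch.
void runFp8BlockscaleGemm(/* GEMM arguments elided */)
{
    int const sm = getSmVersion();
    if (sm != 90)
    {
        throw std::runtime_error("FP8 Blockscale GEMM requires SM90 (Hopper); got SM" + std::to_string(sm));
    }
    // launchFp8BlockscaleGemmKernel(...);  // placeholder for the real kernel launch
}
```

In a real codebase such a check would typically sit in the GEMM runner's support-check or dispatch path, so callers can detect the unsupported architecture up front and fall back to a different kernel.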
cmake                 cmake fix #3109: early exit cmake if find_library() does not find any lib (#3113)  2025-03-29 19:59:03 +08:00
include/tensorrt_llm  fix: add SM90 guard for FP8 Blockscale GEMM (#3575)                                 2025-04-16 14:44:37 +08:00
micro_benchmarks      feat: Add FP8 support for SM 120 (#3248)                                            2025-04-14 16:05:41 -07:00
tensorrt_llm          fix: disable KV cache reuse if using attention sink (#3021)                         2025-04-16 03:07:32 +08:00
tests                 chore: Clean up cpp runtime (#3537)                                                 2025-04-15 16:06:14 +08:00
CMakeLists.txt        Revert "infra: move nvrtc_wrapper to conan (#3282)" (#3573)                         2025-04-15 22:45:13 +08:00