TensorRT-LLM/cpp/tensorrt_llm
Chang Liu 1d3a5d38af
fix: Update FP8 sf layout for Blackwell and relax blockwise GEMM assertions (#3144)
* Update FP8 sf layout for Blackwell and enable FP8 GEMM end-to-end

* Add a test case for when M needs to be padded

* Better comment

Signed-off-by: Chang Liu <liuc@nvidia.com>

* Add TODO for FP8 quant kernel

Signed-off-by: Chang Liu <liuc@nvidia.com>

* Enable DCO check

Signed-off-by: Chang Liu <liuc@nvidia.com>

* Fix lint

---------

Signed-off-by: Chang Liu <liuc@nvidia.com>
2025-04-01 13:08:29 -07:00
Name | Last commit | Date
batch_manager | refactor: Simplify disableLookahead and improve numDecodingEngineTokens handling (#3103) | 2025-04-01 18:47:31 +08:00
common | fix: fix for cp > kvHeadNum (#3002) | 2025-03-26 12:39:02 +08:00
cutlass_extensions/include/cutlass_extensions | feat: Update cutlass (#2981) | 2025-03-26 22:36:27 +08:00
executor | feat: Add BW measurement (#3070) | 2025-03-28 10:53:00 +08:00
executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
kernels | chore: cutlass cleanup (#3165) | 2025-04-01 13:57:38 +08:00
layers | v1.2 (#3082) | 2025-03-26 23:31:29 +08:00
plugins | chore: cutlass cleanup (#3165) | 2025-04-01 13:57:38 +08:00
pybind | Revert "refactor: Replace DecoderFinishedEvent with CudaEvent in decoder clas…" (#3183) | 2025-04-01 12:49:27 +08:00
runtime | refactor: Simplify disableLookahead and improve numDecodingEngineTokens handling (#3103) | 2025-04-01 18:47:31 +08:00
thop | fix: Update FP8 sf layout for Blackwell and relax blockwise GEMM assertions (#3144) | 2025-04-01 13:08:29 -07:00
CMakeLists.txt | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00