TensorRT-LLM/cpp/tensorrt_llm/kernels/cutlass_kernels
Latest commit: 20b42912ce by Barry Kang
[TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123)
Co-authored-by: Barry Kang <43644113+Barry-Delaney@users.noreply.github.com>
Co-authored-by: Jiang Shao <91270701+StudyingShao@users.noreply.github.com>
Signed-off-by: Barry Kang <43644113+Barry-Delaney@users.noreply.github.com>
Date: 2025-05-14 15:48:07 +08:00
Name | Last commit message | Last commit date
allreduce_gemm/ | feat: support add internal cutlass kernels as subproject (#3658) | 2025-05-06 11:35:07 +08:00
fp8_blockscale_gemm/ | perf: Optimize quantization kernels used in DeepSeek on Hopper (#3466) | 2025-04-15 17:49:57 +08:00
fp8_rowwise_gemm/ | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00
fpA_intB_gemm/ | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
fused_gated_gemm/ | Update TensorRT-LLM (#2460) | 2024-11-19 18:30:34 +08:00
int8_gemm/ | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00
python/ | [TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123) | 2025-05-14 15:48:07 +08:00
CMakeLists.txt | feat: register ENABLE_MULTI_DEVICE and ENABLE_UCX as CMake options (#3343) | 2025-04-14 10:30:23 +08:00
cutlass_heuristic.cpp | [TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123) | 2025-05-14 15:48:07 +08:00
cutlass_heuristic.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
cutlass_preprocessors.cpp | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00
cutlass_preprocessors.h | Update TensorRT-LLM (#1492) | 2024-04-24 14:44:22 +08:00
cutlass_type_conversion.h | chore: cutlass cleanup (#3165) | 2025-04-01 13:57:38 +08:00