TensorRT-LLM/cpp/tensorrt_llm/kernels/cutlass_kernels
Latest commit: 1d3b98b920 by jiahanc — perf: Optimize quantization kernels used in DeepSeek on Hopper (#3466) — 2025-04-15 17:49:57 +08:00
Signed-off-by: jiahanc <jiahanc@nvidia.com>
| Name | Last commit | Date |
| --- | --- | --- |
| allreduce_gemm | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| fp8_blockscale_gemm | perf: Optimize quantization kernels used in DeepSeek on Hopper (#3466) | 2025-04-15 17:49:57 +08:00 |
| fp8_rowwise_gemm | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| fpA_intB_gemm | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| fused_gated_gemm | Update TensorRT-LLM (#2460) | 2024-11-19 18:30:34 +08:00 |
| int8_gemm | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| python | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| CMakeLists.txt | feat: register ENABLE_MULTI_DEVICE and ENABLE_UCX as CMake options (#3343) | 2025-04-14 10:30:23 +08:00 |
| cutlass_heuristic.cpp | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| cutlass_heuristic.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| cutlass_preprocessors.cpp | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| cutlass_preprocessors.h | Update TensorRT-LLM (#1492) | 2024-04-24 14:44:22 +08:00 |
| cutlass_type_conversion.h | chore: cutlass cleanup (#3165) | 2025-04-01 13:57:38 +08:00 |