TensorRT-LLM/cpp/tensorrt_llm/kernels/cutlass_kernels
Latest commit: 5339d367ce — [perf] Reduce the workspace size of FP4 activation scales for MoE (#4303)
Author: Jinyang Yuan (Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>)
Date: 2025-05-30 09:03:52 +08:00
Name                         Last commit message                                                         Last commit date
allreduce_gemm               chore: guardword clean for header file. (#4540)                             2025-05-23 10:08:14 +08:00
fp8_blockscale_gemm          [perf] Reduce the workspace size of FP4 activation scales for MoE (#4303)   2025-05-30 09:03:52 +08:00
fp8_rowwise_gemm             feat: Add FP8 support for SM 120 (#3248)                                    2025-04-14 16:05:41 -07:00
fpA_intB_gemm                Update TensorRT-LLM (#2755)                                                 2025-02-11 03:01:00 +00:00
fused_gated_gemm             Update TensorRT-LLM (#2460)                                                 2024-11-19 18:30:34 +08:00
int8_gemm                    feat: Add FP8 support for SM 120 (#3248)                                    2025-04-14 16:05:41 -07:00
python                       [TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123)              2025-05-14 15:48:07 +08:00
CMakeLists.txt               chroe:clean useless flag (#4567)                                            2025-05-23 07:05:15 +08:00
cutlass_heuristic.cpp        [TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123)              2025-05-14 15:48:07 +08:00
cutlass_heuristic.h          Update TensorRT-LLM (#2755)                                                 2025-02-11 03:01:00 +00:00
cutlass_preprocessors.cpp    feat: Add FP8 support for SM 120 (#3248)                                    2025-04-14 16:05:41 -07:00
cutlass_preprocessors.h      Update TensorRT-LLM (#1492)                                                 2024-04-24 14:44:22 +08:00
cutlass_type_conversion.h    chore: cutlass cleanup (#3165)                                              2025-04-01 13:57:38 +08:00