TensorRT-LLM/cpp/tensorrt_llm/kernels/cutlass_kernels
Latest commit: 200ea9ee81 fix TMA error with GEMM+AR on TP=2 (#6075), 2025-07-18 10:26:08 +08:00
Signed-off-by: Xavier Simmons <xsimmons@nvidia.com>
| Name | Last commit | Date |
| --- | --- | --- |
| allreduce_gemm/ | fix TMA error with GEMM+AR on TP=2 (#6075) | 2025-07-18 10:26:08 +08:00 |
| fp4_gemm/ | [TRTLLM-5366][feat] Add support for sm121 (#5524) | 2025-07-08 14:27:00 -07:00 |
| fp8_blockscale_gemm/ | [fix] fix tileN cannot % 16==0 & support sm89 deepgemm bmm (#5531) | 2025-07-10 15:16:18 +08:00 |
| fp8_rowwise_gemm/ | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00 |
| fpA_intB_gemm/ | opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222) | 2025-06-26 12:18:19 +08:00 |
| fused_gated_gemm/ | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00 |
| include/ | feat: Add support for benchmarking individual gemms in MOE benchmark (#6080) | 2025-07-18 09:00:12 +12:00 |
| int8_gemm/ | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| low_latency_gemm/ | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00 |
| moe_gemm/ | perf: Enable 128x256 tile shapes for FP4 MOE CUTLASS backend (#5986) | 2025-07-14 14:04:15 -07:00 |
| python/ | perf: Enable 128x256 tile shapes for FP4 MOE CUTLASS backend (#5986) | 2025-07-14 14:04:15 -07:00 |
| CMakeLists.txt | Fix: fix moe regression for sm120 (#5823) | 2025-07-09 21:25:11 +08:00 |
| cutlass_heuristic.cpp | perf: Enable 128x256 tile shapes for FP4 MOE CUTLASS backend (#5986) | 2025-07-14 14:04:15 -07:00 |
| cutlass_heuristic.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| cutlass_preprocessors.cpp | [TRTLLM-5366][feat] Add support for sm121 (#5524) | 2025-07-08 14:27:00 -07:00 |
| cutlass_preprocessors.h | Update TensorRT-LLM (#1492) | 2024-04-24 14:44:22 +08:00 |
| cutlass_type_conversion.h | chore: cutlass cleanup (#3165) | 2025-04-01 13:57:38 +08:00 |