TensorRT-LLM/cpp/tensorrt_llm/kernels/cutlass_kernels
Latest commit: b6013da198 by xavier-nvidia, 2025-07-09 08:48:47 +08:00
Fix GEMM+AR fusion on blackwell (#5563)
Signed-off-by: xsimmons <xsimmons@nvidia.com>
Name                       Last commit                                                    Date
allreduce_gemm/            Fix GEMM+AR fusion on blackwell (#5563)                        2025-07-09 08:48:47 +08:00
fp4_gemm/                  [TRTLLM-5366][feat]Add support for sm121 (#5524)               2025-07-08 14:27:00 -07:00
fp8_blockscale_gemm/       feat: reduce unnecessary kernel generation (#5476)             2025-07-04 14:37:49 +08:00
fp8_rowwise_gemm/          feat: reduce unnecessary kernel generation (#5476)             2025-07-04 14:37:49 +08:00
fpA_intB_gemm/             opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222)  2025-06-26 12:18:19 +08:00
fused_gated_gemm/          feat: reduce unnecessary kernel generation (#5476)             2025-07-04 14:37:49 +08:00
include/                   feat: Add support for MXFP8xMXFP4 in pytorch (#5535)           2025-07-06 15:32:06 -07:00
int8_gemm/                 feat: Add FP8 support for SM 120 (#3248)                       2025-04-14 16:05:41 -07:00
low_latency_gemm/          feat: reduce unnecessary kernel generation (#5476)             2025-07-04 14:37:49 +08:00
moe_gemm/                  [TRTLLM-5366][feat]Add support for sm121 (#5524)               2025-07-08 14:27:00 -07:00
python/                    [TRTLLM-5366][feat]Add support for sm121 (#5524)               2025-07-08 14:27:00 -07:00
CMakeLists.txt             Fix GEMM+AR fusion on blackwell (#5563)                        2025-07-09 08:48:47 +08:00
cutlass_heuristic.cpp      [TRTLLM-5366][feat]Add support for sm121 (#5524)               2025-07-08 14:27:00 -07:00
cutlass_heuristic.h        Update TensorRT-LLM (#2755)                                    2025-02-11 03:01:00 +00:00
cutlass_preprocessors.cpp  [TRTLLM-5366][feat]Add support for sm121 (#5524)               2025-07-08 14:27:00 -07:00
cutlass_preprocessors.h    Update TensorRT-LLM (#1492)                                    2024-04-24 14:44:22 +08:00
cutlass_type_conversion.h  chore: cutlass cleanup (#3165)                                 2025-04-01 13:57:38 +08:00