TensorRT-LLM/cpp/tensorrt_llm/kernels/cutlass_kernels/include
Latest commit: 32dfdfba30 — feat: fuse w4a8 moe pre-quant scale on Hopper (#5613), Xiaowei Wang, 2025-07-01 23:02:41 -04:00
File                        Last commit                                                                     Date
allreduce_gemm_runner.h     opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222)                   2025-06-26 12:18:19 +08:00
common.h                    refactoring: port customized kernels with public cutlass version (#5027)        2025-06-13 16:19:31 +08:00
cutlass_kernel_selector.h   opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222)                   2025-06-26 12:18:19 +08:00
fp4_gemm.h                  feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867)                           2025-06-16 11:30:57 +08:00
low_latency_gemm.h          refactoring: port customized kernels with public cutlass version (#5027)        2025-06-13 16:19:31 +08:00
moe_gemm_kernels.h          opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222)                   2025-06-26 12:18:19 +08:00
moe_kernels.h               [TRTLLM-5965] perf: Optimize MoE sort kernels for large-scale EP (#5435)        2025-06-30 01:02:07 +08:00
moe_util_kernels.h          feat: fuse w4a8 moe pre-quant scale on Hopper (#5613)                           2025-07-01 23:02:41 -04:00