TensorRT-LLM/cpp/tensorrt_llm/kernels/cutlass_kernels/include
Neta Zmora 34dc6869f3
[#8732][feat] Update TRTLLM Cutlass MoE kernels with ReLU2 (#9011)
Update TRTLLM Cutlass MoE kernels with ReLU2 activation.

Nemotron-6 requires the ReLU2 (i.e., squared ReLU) MoE activation function.
This PR adds ReLU2 support and, more generally, an API for setting the activation function.
The ReLU2 changes are based on this FlashInfer PR: https://github.com/flashinfer-ai/flashinfer/pull/1954.
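For reference, ReLU2 is the element-wise squared ReLU, relu2(x) = max(x, 0)^2. Below is a minimal PyTorch sketch of the activation and of an unfused, non-gated expert FFN that uses it; the function names, shapes, and the FFN form are illustrative assumptions, not the kernel's actual interface.

```python
import torch
import torch.nn.functional as F

def relu2(x: torch.Tensor) -> torch.Tensor:
    """Squared ReLU: relu2(x) = max(x, 0) ** 2."""
    r = F.relu(x)
    return r * r

def expert_ffn_relu2(x: torch.Tensor, w1: torch.Tensor, w2: torch.Tensor) -> torch.Tensor:
    """Unfused expert FFN with ReLU2 (illustrative only): y = relu2(x @ w1^T) @ w2^T.

    Assumed shapes: x [num_tokens, hidden], w1 [inter, hidden], w2 [hidden, inter].
    """
    return relu2(x @ w1.t()) @ w2.t()
```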

The PR also updates the Auto Deploy MoE backend for 16-bit and FP8 from
Triton (`torch.ops.auto_deploy.triton_moe_fused`, `torch.ops.auto_deploy.triton_quant_fp8_moe`) to TRTLLM/Cutlass (`torch.ops.auto_deploy.trtllm_moe_fused`, `torch.ops.auto_deploy.trtllm_quant_fp8_moe_fused`).
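The backend switch keeps the MoE semantics unchanged: route each token to its top-k experts, run each expert's FFN, and combine the expert outputs weighted by the renormalized router probabilities. The following unfused PyTorch reference sketches that computation; argument names, layouts, and the ReLU2 default are assumptions, not the actual signature of `trtllm_moe_fused` or `trtllm_quant_fp8_moe_fused`.

```python
import torch

def reference_moe(x, router_logits, w1, w2, top_k=2, act=lambda t: torch.relu(t) ** 2):
    """Unfused reference of the computation a fused MoE op performs (illustrative only).

    Assumed shapes:
      x:             [num_tokens, hidden]
      router_logits: [num_tokens, num_experts]
      w1:            [num_experts, inter, hidden]
      w2:            [num_experts, hidden, inter]
    """
    probs = torch.softmax(router_logits, dim=-1)
    topk_w, topk_ids = torch.topk(probs, top_k, dim=-1)      # [num_tokens, top_k]
    topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)       # renormalize routing weights
    out = torch.zeros_like(x)
    for e in range(w1.shape[0]):
        token_idx, slot = (topk_ids == e).nonzero(as_tuple=True)
        if token_idx.numel() == 0:
            continue
        h = act(x[token_idx] @ w1[e].t()) @ w2[e].t()         # per-expert FFN
        out.index_add_(0, token_idx, h * topk_w[token_idx, slot].unsqueeze(-1))
    return out
```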

Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
Co-authored-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
2025-11-13 16:54:45 -08:00
allreduce_gemm_runner.h opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222) 2025-06-26 12:18:19 +08:00
common.h [#8732][feat] Update TRTLLM Cutlass MoE kernels with ReLU2 (#9011) 2025-11-13 16:54:45 -08:00
cutlass_kernel_selector.h opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222) 2025-06-26 12:18:19 +08:00
fp4_gemm.h feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867) 2025-06-16 11:30:57 +08:00
low_latency_gemm.h refactoring: port customized kernels with public cutlass version (#5027) 2025-06-13 16:19:31 +08:00
moe_gemm_kernels.h [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) 2025-09-16 09:56:18 +08:00
moe_kernels.h [None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501) 2025-10-27 10:18:19 +08:00
moe_util_kernels.h [TRTLLM-7319][perf] Fuse slicing into MoE. (#6728) 2025-08-25 16:52:30 -04:00