TensorRT-LLM/tensorrt_llm/_torch
Neta Zmora 34dc6869f3
[#8732][feat] Update TRTLLM Cutlass MoE kernels with ReLU2 (#9011)
Update TRTLLM Cutlass MoE kernels with ReLU2 activation.

Nemotron-6 requires the ReLU2 (i.e., squared ReLU) MoE activation function.
This PR adds it and, more generally, introduces an API for setting the activation function.
The ReLU2 changes are based on this FlashInfer PR: https://github.com/flashinfer-ai/flashinfer/pull/1954.
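For reference, ReLU2 is just the element-wise square of ReLU; a minimal PyTorch sketch (the `relu2` name here is illustrative, not the kernel's API):

```python
import torch

def relu2(x: torch.Tensor) -> torch.Tensor:
    # ReLU2 / squared ReLU: max(x, 0) ** 2, applied element-wise.
    return torch.square(torch.relu(x))
```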

The PR also switches the Auto Deploy MoE backends for 16-bit and FP8 from Triton (`torch.ops.auto_deploy.triton_moe_fused`, `torch.ops.auto_deploy.triton_quant_fp8_moe`) to TRTLLM/Cutlass (`torch.ops.auto_deploy.trtllm_moe_fused`, `torch.ops.auto_deploy.trtllm_quant_fp8_moe_fused`).
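As a rough picture of what a fused MoE kernel with ReLU2 computes, here is an unfused, plain-PyTorch reference loop; the tensor names, shapes, and two-GEMM expert layout are assumptions for illustration and do not reflect the `trtllm_moe_fused` signature:

```python
import torch

def moe_relu2_reference(
    x: torch.Tensor,             # [tokens, hidden] input activations
    w_up: torch.Tensor,          # [experts, inter, hidden] up-projection weights (assumed layout)
    w_down: torch.Tensor,        # [experts, hidden, inter] down-projection weights (assumed layout)
    topk_ids: torch.Tensor,      # [tokens, k] selected expert indices per token
    topk_weights: torch.Tensor,  # [tokens, k] router weights per selected expert
) -> torch.Tensor:
    out = torch.zeros_like(x)
    for e in range(w_up.shape[0]):
        # Gather the (token, slot) pairs routed to expert e.
        tok, slot = (topk_ids == e).nonzero(as_tuple=True)
        if tok.numel() == 0:
            continue
        h = x[tok] @ w_up[e].t()          # up-projection GEMM
        h = torch.square(torch.relu(h))   # ReLU2 activation between the two GEMMs
        out[tok] += topk_weights[tok, slot].unsqueeze(1) * (h @ w_down[e].t())
    return out
```

A fused kernel performs the routing, both GEMMs, and the activation in far fewer launches; the loop above only pins down the math.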

Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
Co-authored-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
2025-11-13 16:54:45 -08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| attention_backend/ | [None][fix] Clear indexer k cache reference before releasing CUDA memory (#9110) | 2025-11-12 22:12:53 -08:00 |
| auto_deploy/ | [#8732][feat] Update TRTLLM Cutlass MoE kernels with ReLU2 (#9011) | 2025-11-13 16:54:45 -08:00 |
| compilation/ | [https://nvbugs/5550409][fix] Disable torch compile in piecewise attention part to avoid host overhead (#8708) | 2025-10-29 18:12:58 +08:00 |
| configs/ | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| custom_ops/ | [#8732][feat] Update TRTLLM Cutlass MoE kernels with ReLU2 (#9011) | 2025-11-13 16:54:45 -08:00 |
| cute_dsl_kernels/ | [TRTLLM-6898][feat] Add swapab, tileN64, cga sync support for cute dsl nvfp4 gemm (#7764) | 2025-09-18 21:20:04 +08:00 |
| debug/ | Add debug hook to support dumping tensor data and adding new debug functions easily (#5182) | 2025-06-24 17:45:28 +08:00 |
| distributed/ | [None][feat] MNNVLAllreduce Kernel Refactor (#8018) | 2025-11-05 08:49:47 +08:00 |
| models/ | [None][fix] Fix the aux_stream in Llama4MinLatencyFusedMoE (#9035) | 2025-11-13 09:09:52 -08:00 |
| modules/ | [None][feat] Enable EPLB for trtllm-gen and cutlass backend (#8886) | 2025-11-12 12:30:27 -08:00 |
| peft/ | [TRTLLM-7346][fix] Improve performance of PyTorchModelEngine._get_lora_params_from_requests (#7033) | 2025-08-25 10:37:40 +03:00 |
| pyexecutor/ | [https://nvbugs/5652552][fix] Log the llm args for main branch (#9120) | 2025-11-14 07:43:21 +08:00 |
| shared_tensor/ | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| speculative/ | [TRTLLM-8084][feat] Enhance the overlap scheduler for two-model spec decoding (#8706) | 2025-11-13 10:20:16 -05:00 |
| __init__.py | [TRTLLM-9212][chore] move MoeLoadBalancerConfig to llm_args.py (#9002) | 2025-11-13 10:47:35 +08:00 |
| autotuner.py | [None][fix] support topk autotuner input for expert slots per group larger than 32 (#9087) | 2025-11-14 08:37:20 +08:00 |
| cublaslt_utils.py | [https://nvbugs/5451205][feat] Add cuBLASLt NVFP4 GEMM backend support (#7943) | 2025-10-23 15:55:10 +08:00 |
| cute_dsl_utils.py | [None][chore] polish error message in cute_dsl_utils.py (#7852) | 2025-09-19 12:05:11 +08:00 |
| device_mesh.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| expert_statistic.py | Add MTP support for Online EPLB (#5213) | 2025-06-25 07:58:13 +08:00 |
| flashinfer_utils.py | [None][ci] move unittests to sub-directories (#6635) | 2025-08-20 05:42:22 -04:00 |
| hostfunc.py | [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) | 2025-09-03 15:16:11 -07:00 |
| llm.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| memory_buffer_utils.py | [TRTLLM-8690][feat] add more tensors to share buffers (#8691) | 2025-11-03 21:08:01 -08:00 |
| metadata.py | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00 |
| model_config.py | [TRTLLM-9212][chore] move MoeLoadBalancerConfig to llm_args.py (#9002) | 2025-11-13 10:47:35 +08:00 |
| utils.py | [TRTLLM-9198][perf] Add torch.compile + multi-stream support for k-cache scatter and weight scaling (#8988) | 2025-11-11 12:33:30 +08:00 |
| virtual_memory.py | [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) | 2025-11-04 10:19:24 -08:00 |