TensorRT-LLMs/cpp/tensorrt_llm
Neta Zmora 1d6fbbf45d
[#9236][feature] Make sharing of activation_type across SW layers more robust (#9238)
The C++ code, the Python code, and the Python MoE layer all share the definition of ActivationType.
Currently this sharing is done through redefinition, which is fragile and can break when new activation function types are added.

tensorrt_llm/_torch/utils.py
cpp/tensorrt_llm/kernels/cutlass_kernels/include/common.h
=>
tensorrt_llm/layers/moe.py
cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
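The fragility the commit describes can be illustrated with a minimal sketch. The enum names and values below are hypothetical stand-ins, not TensorRT-LLM's actual ActivationType definition: when the same enum is redefined independently on the C++ side and the Python side, adding a member to only one copy silently puts them out of sync, which a single shared definition avoids.

```python
from enum import IntEnum

# Stand-in for the C++-side definition as seen through bindings
# (members are illustrative only).
class CppActivationType(IntEnum):
    Gelu = 0
    Relu = 1
    Silu = 2
    ReLU2 = 3   # newly added on the kernel side

# A stale Python-side redefinition that was not updated in lockstep.
class PyActivationType(IntEnum):
    Gelu = 0
    Relu = 1
    Silu = 2

def enum_drift(a, b):
    """Return member names present in one enum but not the other."""
    names_a = {m.name for m in a}
    names_b = {m.name for m in b}
    return names_a ^ names_b

# With two redefinitions, drift shows up only when checked explicitly:
# enum_drift(CppActivationType, PyActivationType) -> {"ReLU2"}
# With one shared definition, this set is empty by construction.
```

Sharing one definition (e.g. exposing the C++ enum through the bindings and importing it everywhere in Python) makes this class of drift impossible rather than merely detectable.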

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2025-11-20 16:06:58 +08:00
batch_manager [#8476][chore] Update license (#8807) 2025-11-19 15:05:25 -08:00
common [None][feat] Add TRTLLM_NIXL_KVCACHE_BACKEND environment variable for NIXL backend selection (#9075) 2025-11-17 15:39:55 -08:00
cutlass_extensions/include/cutlass_extensions [#8732][feat] Add ReLU2 to TRTLLM Cutlass MoE BF16 kernels (#9191) 2025-11-17 20:30:00 -08:00
deep_ep [TRTLLM-6589][feat] Support CUDA graph for DeepEP (#7514) 2025-10-02 10:13:24 -07:00
deep_gemm [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) 2025-08-06 16:44:21 +08:00
executor [#8476][chore] Update license (#8807) 2025-11-19 15:05:25 -08:00
executor_worker Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
flash_mla [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) 2025-10-24 13:40:41 -04:00
kernels [#9236][feature] Make sharing of activation_type across SW layers more robust (#9238) 2025-11-20 16:06:58 +08:00
layers [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
nanobind [None][feat] Have ability to cancel disagg request if KV cache resource are exhausted (#9155) 2025-11-18 20:59:17 -05:00
plugins [None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501) 2025-10-27 10:18:19 +08:00
pybind [None][feat] Have ability to cancel disagg request if KV cache resource are exhausted (#9155) 2025-11-18 20:59:17 -05:00
runtime [None][refactor] decoding inputs, part 2 (#5799) 2025-11-18 14:38:51 +01:00
testing fix: Improve chunking test and skip empty kernel calls (#5710) 2025-07-04 09:08:15 +02:00
thop [None][perf] Adjust select_alltoall_method_type. (#8950) 2025-11-19 07:43:55 -08:00
CMakeLists.txt [TRTLLM-9286][feat] Integration of CuteDSL NVFP4 grouped GEMM (#8880) 2025-11-18 17:40:12 -08:00