TensorRT-LLMs/cpp/tensorrt_llm/kernels/cuteDslKernels
Neta Zmora 1d6fbbf45d
[#9236][feature] Make sharing of activation_type across SW layers more robust (#9238)
The C++ code, the Python code, and the Python MoE layer all share the definition of ActivationType.
Currently this sharing is done through redefinition, which is fragile and can break when new activation function types are added.

tensorrt_llm/_torch/utils.py
cpp/tensorrt_llm/kernels/cutlass_kernels/include/common.h
=>
tensorrt_llm/layers/moe.py
cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
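
The fragility described above comes from maintaining parallel enum definitions by hand on the C++ and Python sides. A minimal Python sketch of the problem and one common mitigation (all names and values here are illustrative, not the actual TensorRT-LLM definitions):

```python
from enum import IntEnum

# Hypothetical Python-side mirror of a C++ `enum class ActivationType`.
# If the C++ enum gains or reorders a member, this hand-written copy
# silently drifts out of sync -- the fragility the commit addresses.
class ActivationType(IntEnum):
    Gelu = 0
    Relu = 1
    Silu = 2
    Swiglu = 3

# One common mitigation: check the Python enum at import time against
# values exported by the C++ extension (stubbed here with a dict),
# so any mismatch fails loudly instead of miscomputing.
cpp_exported = {"Gelu": 0, "Relu": 1, "Silu": 2, "Swiglu": 3}

def check_enum_sync():
    for name, value in cpp_exported.items():
        assert ActivationType[name].value == value, f"drift in {name}"
    assert len(ActivationType) == len(cpp_exported), "member count mismatch"

check_enum_sync()
```

A more robust fix, as the commit title suggests, is to keep a single source of truth (e.g. export the enum from the C++ side and consume it in Python) rather than redefining it per layer.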

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2025-11-20 16:06:58 +08:00
CMakeLists.txt — [TRTLLM-9286][feat] Integration of CuteDSL NVFP4 grouped GEMM (#8880) — 2025-11-18 17:40:12 -08:00
moeUtils.cu — [#9236][feature] Make sharing of activation_type across SW layers more robust (#9238) — 2025-11-20 16:06:58 +08:00
moeUtils.h — [TRTLLM-9286][feat] Integration of CuteDSL NVFP4 grouped GEMM (#8880) — 2025-11-18 17:40:12 -08:00