TensorRT-LLMs/tensorrt_llm/layers
Neta Zmora 1d6fbbf45d
[#9236][feature] Make sharing of activation_type across SW layers more robust (#9238)
The C++ code, the Python code, and the Python MoE layer all share the definition of ActivationType.
Currently this sharing is done through redefinition, which is fragile and can break when new activation function types are added.
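
As an illustration only (not the code from this change), the minimal Python sketch below shows why a hand-maintained redefinition can drift, and one way to derive the enum from a single shared table instead. All names in the sketch are hypothetical stand-ins, not TensorRT-LLM APIs.

    # Self-contained sketch: a single source of truth for the enum values.
    from enum import IntEnum

    # Stand-in for the values the C++ side would expose through its bindings.
    _CPP_ACTIVATION_TYPES = {
        "Gelu": 0,
        "Relu": 1,
        "Silu": 2,
        "Swiglu": 3,  # adding a new entry here is automatically picked up below
    }

    # Robust: build the Python enum from the shared table.
    ActivationType = IntEnum("ActivationType", _CPP_ACTIVATION_TYPES)

    # Fragile pattern the commit message warns about: a second, hand-maintained
    # copy that can silently drift when new activation functions are added.
    class _RedefinedActivationType(IntEnum):
        Gelu = 0
        Relu = 1
        Silu = 2
        # Swiglu is missing here -> silent mismatch with the C++ side.

    assert ActivationType.Swiglu == 3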

tensorrt_llm/_torch/utils.py
cpp/tensorrt_llm/kernels/cutlass_kernels/include/common.h
=>
tensorrt_llm/layers/moe.py
cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2025-11-20 16:06:58 +08:00
__init__.py
activation.py
attention.py
cast.py
conv.py
embedding.py
language_adapter.py
linear.py
lora.py
mlp.py
moe.py
normalization.py
pooling.py
recurrent.py
ssm.py