TensorRT-LLM/tensorrt_llm/layers
Neta Zmora 1d6fbbf45d
[#9236][feature] Make sharing of activation_type across SW layers more robust (#9238)
The C++ kernels, the Python utilities, and the Python MoE layer all share the definition of ActivationType. Currently this sharing is done through redefinition, which is fragile and can break when new activation function types are added. (A sketch of the more robust pattern follows the file list below.)

Definition shared from:
    tensorrt_llm/_torch/utils.py
    cpp/tensorrt_llm/kernels/cutlass_kernels/include/common.h
=> consumed in:
    tensorrt_llm/layers/moe.py
    cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
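
To make the intent concrete, here is a minimal sketch of the single-source-of-truth pattern the commit describes. Everything below is illustrative: the member names, the values, and the assert_enum_matches helper are hypothetical and not taken from the patch; only the idea is, namely defining ActivationType once, importing it everywhere, and optionally checking for drift against the C++ side at import time.

```python
from enum import IntEnum

# Hypothetical single definition of ActivationType. Member names and
# values here are illustrative, not copied from the actual patch.
class ActivationType(IntEnum):
    Gelu = 0
    Relu = 1
    Silu = 2
    Swiglu = 3
    Identity = 4

def assert_enum_matches(py_enum, reference_members):
    """Fail fast if a mirrored enum drifts from its authoritative source.

    `reference_members` stands in for the definition exposed by the C++
    bindings (a hypothetical stand-in, not a real TensorRT-LLM API).
    """
    mirrored = {m.name: int(m) for m in py_enum}
    if mirrored != reference_members:
        raise RuntimeError(
            "ActivationType definitions have drifted: "
            f"python={mirrored} reference={reference_members}")

# Consumers import the one definition instead of re-declaring it, e.g.
#     from tensorrt_llm._torch.utils import ActivationType
# (path taken from the commit's file list). A check like the one above
# catches an activation added on only one side before any kernel is
# dispatched with a stale enum value.
assert_enum_matches(ActivationType,
                    {"Gelu": 0, "Relu": 1, "Silu": 2,
                     "Swiglu": 3, "Identity": 4})
```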

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2025-11-20 16:06:58 +08:00
__init__.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
activation.py Update TensorRT-LLM (#787) 2024-01-02 17:54:32 +08:00
attention.py [https://nvbugs/5302040][feat] Add whisper support (Bert Attention on SM100 and GPTAttention for cross attention on SM100) (#5527) 2025-08-13 11:19:13 -07:00
cast.py Update TensorRT-LLM (#787) 2024-01-02 17:54:32 +08:00
conv.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
embedding.py fix: #3137 speculative decoding and multimodal input support (#3276) 2025-04-09 23:40:19 +08:00
language_adapter.py Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
linear.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
lora.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
mlp.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
moe.py [#9236][feature] Make sharing of activation_type across SW layers more robust (#9238) 2025-11-20 16:06:58 +08:00
normalization.py Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
pooling.py Update TensorRT-LLM (#787) 2024-01-02 17:54:32 +08:00
recurrent.py Update TensorRT-LLM (#1954) 2024-07-16 15:30:25 +08:00
ssm.py Update TensorRT-LLM (#2562) 2024-12-11 00:31:05 -08:00