TensorRT-LLM/tensorrt_llm/layers

Latest commit: ee07a7c55e by shivghai, 2025-12-27 11:50:59 -08:00
[None][fix] [Gemma3] Fix RoPE for local attention for Gemma3 (#9961)
Signed-off-by: Shiv Ghai <8965168+shivghai@users.noreply.github.com>
File                  Last commit                                                                               Date
__init__.py
activation.py
attention.py          [None][fix] [Gemma3] Fix RoPE for local attention for Gemma3 (#9961)                     2025-12-27 11:50:59 -08:00
cast.py
conv.py
embedding.py          fix: #3137 speculative decoding and multimodal input support (#3276)                     2025-04-09 23:40:19 +08:00
language_adapter.py
linear.py
lora.py
mlp.py
moe.py                [#9236][feature] Make sharing of activation_type across SW layers more robust (#9238)   2025-11-20 16:06:58 +08:00
normalization.py
pooling.py
recurrent.py
ssm.py
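
For orientation, the sketch below shows one way to inspect which layer classes this directory exposes at the package level. It assumes TensorRT-LLM is installed and that tensorrt_llm/layers/__init__.py re-exports the classes defined in the modules listed above (e.g. attention, MLP, and embedding layers); the exact set of exports depends on the installed version.

```python
# A minimal sketch, assuming the tensorrt_llm package is installed and that
# tensorrt_llm/layers/__init__.py re-exports the layer classes implemented in
# the modules listed above (attention.py, mlp.py, embedding.py, ...).
import tensorrt_llm.layers as layers

# List the public names the package exposes; the exact set depends on the
# installed TensorRT-LLM version.
public_names = sorted(name for name in dir(layers) if not name.startswith("_"))
print(public_names)
```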