TensorRT-LLM/tensorrt_llm/_torch/modules
Latest commit e6f7ff3a46 (Mike Iovine, 2025-04-28 10:58:03 -04:00): [chore] Make llama4 MoE use maybe_execute_in_parallel (#3779)
| Name | Last commit | Last updated |
| --- | --- | --- |
| `mamba/` | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| `__init__.py` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `attention.py` | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| `decoder_layer.py` | chore: Use ellipsis as default value to detect whether residual argument is provided (#3626) | 2025-04-17 12:31:58 +08:00 |
| `embedding.py` | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00 |
| `fused_moe.py` | fix bug of create cuda stream as default parameter which will be init… (#3764) | 2025-04-28 08:16:03 +08:00 |
| `gated_mlp.py` | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| `linear.py` | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| `logits_procesor.py` | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| `mlp.py` | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| `multi_stream_utils.py` | [chore] Make llama4 MoE use maybe_execute_in_parallel (#3779) | 2025-04-28 10:58:03 -04:00 |
| `rms_norm.py` | fix: llama4: address couple of issues in llama4 attention module (#3491) | 2025-04-18 01:54:59 +00:00 |
| `rotary_embedding.py` | feat: Support cos_sin_cache in all cases. (#3517) | 2025-04-16 13:48:44 +08:00 |
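The `decoder_layer.py` commit (#3626) relies on a small Python idiom: using `...` (Ellipsis) as a default value so a function can distinguish an omitted `residual` argument from an explicit `None`. Below is a minimal sketch of that sentinel pattern with hypothetical names and placeholder compute; it is not the module's actual code.

```python
import torch


def decoder_forward(hidden_states: torch.Tensor, residual=...):
    # `...` (Ellipsis) as the default separates three caller intents that a
    # plain `None` default would conflate:
    #   omitted       -> single-tensor API: return only the output
    #   explicit None -> residual-carrying API, first layer in the stack
    #   a tensor      -> residual-carrying API, residual from the prior layer
    if residual is ...:
        return hidden_states * 2.0                # placeholder compute
    if residual is not None:
        hidden_states = hidden_states + residual
    out = hidden_states * 2.0                     # placeholder compute
    return out, hidden_states                     # hand back the new residual


x = torch.ones(2)
print(decoder_forward(x))                 # tensor
print(decoder_forward(x, None))           # (out, residual) tuple
print(decoder_forward(x, torch.ones(2)))  # (out, residual) tuple
```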
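The `fused_moe.py` fix (#3764) addresses a classic Python pitfall: a default-argument expression such as `torch.cuda.Stream()` is evaluated once, when the function is defined at import time, possibly before the CUDA context or target device is set up, and the single resulting stream is then silently shared by every caller. A hedged sketch of the bug shape and the usual remedy, with made-up function names rather than the actual `fused_moe.py` code:

```python
import torch


# Bug shape: the default is evaluated once, when this `def` line runs at
# import time. On a machine without CUDA, even defining the function raises;
# with CUDA, one stream is created before device setup and shared by all calls.
def run_on_aux_buggy(work, aux_stream=torch.cuda.Stream()):
    with torch.cuda.stream(aux_stream):
        return work()


# Remedy: default to None and create (or fetch) the stream lazily, per call.
def run_on_aux_fixed(work, aux_stream=None):
    if aux_stream is None:
        aux_stream = torch.cuda.Stream()
    with torch.cuda.stream(aux_stream):
        return work()
```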
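`multi_stream_utils.py` exposes `maybe_execute_in_parallel`, which the latest commit (#3779) wires into the llama4 MoE path. The sketch below shows one plausible shape for such a helper; the signature and event choreography are assumptions, not the repo's implementation. The idea: overlap two independent callables on separate CUDA streams when an auxiliary stream is available, otherwise fall back to sequential execution. Callers are assumed to own and reuse the two `torch.cuda.Event` objects to avoid per-call allocation.

```python
import torch


def maybe_execute_in_parallel(fn_a, fn_b, event_a, event_b, aux_stream=None):
    if aux_stream is None:
        # No auxiliary stream: just run the two callables back to back.
        return fn_a(), fn_b()
    event_a.record()                      # mark main-stream progress so far
    with torch.cuda.stream(aux_stream):
        aux_stream.wait_event(event_a)    # aux waits for main's prior work
        out_b = fn_b()                    # launches on the auxiliary stream
        event_b.record()                  # mark fn_b's completion on aux
    out_a = fn_a()                        # overlaps with fn_b on main stream
    torch.cuda.current_stream().wait_event(event_b)   # rejoin before use
    return out_a, out_b
```

Overlapping only pays off when the two callables have no data dependency, which is presumably why the MoE path is a candidate here.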