TensorRT-LLM/tensorrt_llm/_torch/modules
Jinyang Yuan 992d513bc6
feat: Optionally split MoE inputs into chunks to reduce GPU memory usage (#3104)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
Co-authored-by: raccoonliukai <raccoonliu@tencent.com>
2025-04-01 16:07:02 +08:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
attention.py Update (#2978) 2025-03-23 16:39:35 +08:00
decoder_layer.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
embedding.py Update (#2978) 2025-03-23 16:39:35 +08:00
fused_moe.py feat: Optionally split MoE inputs into chunks to reduce GPU memory usage (#3104) 2025-04-01 16:07:02 +08:00
gated_mlp.py Update (#2978) 2025-03-23 16:39:35 +08:00
linear.py Refactor imports inside tensorrt_llm._torch. (#3015) 2025-03-26 11:01:07 +08:00
logits_procesor.py Update (#2978) 2025-03-23 16:39:35 +08:00
mamba.py Update TensorRT-LLM (#2820) 2025-02-25 21:21:49 +08:00
mlp.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
rms_norm.py Update (#2978) 2025-03-23 16:39:35 +08:00
rotary_embedding.py Update (#2978) 2025-03-23 16:39:35 +08:00
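
The latest commit above (#3104) adds optional chunking of MoE inputs in fused_moe.py so that peak activation memory is bounded by the chunk size rather than the full batch of tokens. Below is a minimal, self-contained sketch of that general idea only; it is not the actual TensorRT-LLM implementation, and the wrapper class, the `max_chunk_tokens` parameter, and the assumption that the MoE module maps `[num_tokens, hidden]` to `[num_tokens, hidden]` are all illustrative.

```python
# Sketch: run an MoE layer on fixed-size slices of the token dimension so that
# peak activation memory scales with the chunk size instead of the full input,
# at the cost of launching the MoE kernels several times.
# NOTE: class name, chunk size, and interface are illustrative assumptions,
# not the fused_moe.py API.
import torch
import torch.nn as nn


class ChunkedMoEWrapper(nn.Module):
    def __init__(self, moe: nn.Module, max_chunk_tokens: int = 8192):
        super().__init__()
        self.moe = moe
        self.max_chunk_tokens = max_chunk_tokens

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        num_tokens = hidden_states.shape[0]
        # Small inputs go through in one shot; nothing to split.
        if num_tokens <= self.max_chunk_tokens:
            return self.moe(hidden_states)
        # Otherwise process the tokens chunk by chunk and concatenate results.
        outputs = []
        for start in range(0, num_tokens, self.max_chunk_tokens):
            chunk = hidden_states[start:start + self.max_chunk_tokens]
            outputs.append(self.moe(chunk))
        return torch.cat(outputs, dim=0)


# Example usage (assumes some MoE module `moe` that maps
# [num_tokens, hidden] -> [num_tokens, hidden]):
#   chunked = ChunkedMoEWrapper(moe, max_chunk_tokens=4096)
#   out = chunked(hidden_states)
```

The trade-off is the usual memory-for-latency one: smaller chunks lower the activation peak but increase the number of kernel launches, which is why the feature is described as optional.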