TensorRT-LLM/tensorrt_llm/_torch/modules
Latest commit 6d1f2d0fd7 by Zongfei Jing, 2025-06-10 19:55:16 +08:00
[TRTLLM-3927] [feat] Finalize + Allreduce + add + rmsnorm fusion (#4756)
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
fused_moe [TRTLLM-3927] [feat] Finalize + Allreduce + add + rmsnorm fusion (#4756) 2025-06-10 19:55:16 +08:00
mamba [nvbug 5325284][fix] Increase Nemotron-H warmup request robustness (#4954) 2025-06-10 11:09:37 +03:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
attention.py chore: Refactor apply_rope. (#4918) 2025-06-09 16:51:59 +08:00
decoder_layer.py chore: Change the type annotations of input_ids and position_ids to int32. (#4632) 2025-06-07 16:10:47 +08:00
embedding.py Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
gated_mlp.py Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
linear.py [Architecture] Refactor FusedMoE (#4790) 2025-06-03 14:02:19 +08:00
logits_processor.py feat: LogitsProcessor in PyTorch backend (#3145) 2025-05-01 14:15:30 -07:00
mlp.py feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
multi_stream_utils.py [chore] Make llama4 MoE use maybe_execute_in_parallel (#3779) 2025-04-28 10:58:03 -04:00
rms_norm.py feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) 2025-05-16 04:16:53 +08:00
rotary_embedding.py feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) 2025-05-02 13:25:30 +08:00
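The headline change in this directory, commit 6d1f2d0fd7 (#4756), fuses the MoE finalize step, the tensor-parallel all-reduce, the residual add, and RMSNorm into a single operation. For orientation, here is a minimal PyTorch sketch of the unfused epilogue such a fusion collapses; the function name, shapes, and the assumption that expert outputs are already combined per token are illustrative only, not TensorRT-LLM's API.

```python
# Unfused MoE epilogue: finalize -> all-reduce -> residual add -> RMSNorm.
# Illustrative sketch only; names and shapes are hypothetical.
import torch
import torch.distributed as dist

def unfused_moe_epilogue(expert_out: torch.Tensor,
                         residual: torch.Tensor,
                         norm_weight: torch.Tensor,
                         eps: float = 1e-6) -> torch.Tensor:
    # 1. Finalize: assume expert outputs are already gathered and
    #    weighted-summed per token (omitted for brevity).
    hidden = expert_out
    # 2. All-reduce the partial sums across tensor-parallel ranks.
    if dist.is_initialized():
        dist.all_reduce(hidden, op=dist.ReduceOp.SUM)
    # 3. Residual add.
    hidden = hidden + residual
    # 4. RMSNorm: x / sqrt(mean(x^2) + eps) * weight.
    variance = hidden.pow(2).mean(-1, keepdim=True)
    return hidden * torch.rsqrt(variance + eps) * norm_weight
```

Run unfused, each of steps 2 through 4 reads and writes the full hidden tensor from global memory; fusing them into one kernel, as the commit message describes, removes those intermediate round trips.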