TensorRT-LLM/tensorrt_llm/_torch/modules
Latest commit: 6da95f29a9 by Aurelien Chartier (2025-08-05 11:22:32 -07:00)
[None][feat] Add support for fused gate_up_proj scales for FP8 blockwise (#6496)
Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
| Name | Last commit | Last commit date |
| --- | --- | --- |
| `fused_moe` | [None][feat] Add support for fused gate_up_proj scales for FP8 blockwise (#6496) | 2025-08-05 11:22:32 -07:00 |
| `mamba` | [None][fix] Remove expand configuration from mamba2 mixer (#6521) | 2025-08-05 04:18:25 -04:00 |
| `__init__.py` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `attention.py` | [https://nvbugs/5340941][https://nvbugs/5375785] - fix: Wrap attentio… (#6355) | 2025-08-01 07:38:06 -04:00 |
| `decoder_layer.py` | chore: Change the type annotations of input_ids and position_ids to int32. (#4632) | 2025-06-07 16:10:47 +08:00 |
| `embedding.py` | [https://nvbugs/5355316] fix: update torch.compile option to fix triton store_cubin error (#5865) | 2025-07-14 17:17:30 +08:00 |
| `gated_mlp.py` | [TRTLLM-6657][feat] Add LoRA support for Gemma3 (#6371) | 2025-08-01 09:19:54 -04:00 |
| `linear.py` | [https://nvbugs/5340941][https://nvbugs/5375785] - fix: Wrap attentio… (#6355) | 2025-08-01 07:38:06 -04:00 |
| `logits_processor.py` | feat: LogitsProcessor in PyTorch backend (#3145) | 2025-05-01 14:15:30 -07:00 |
| `mlp.py` | feat: add LLmArgs option to force using dynamic quantization (#5346) | 2025-07-01 12:16:09 -07:00 |
| `multi_stream_utils.py` | [chore] Make llama4 MoE use maybe_execute_in_parallel (#3779) | 2025-04-28 10:58:03 -04:00 |
| `rms_norm.py` | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| `rotary_embedding.py` | feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) | 2025-05-02 13:25:30 +08:00 |
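
The latest commit above adds support for fused gate_up_proj scales for FP8 blockwise quantization. For orientation only, the sketch below shows the general fused gate/up projection pattern that the commit message refers to, in plain PyTorch: the gate and up projections share a single weight (and so would share one scale layout). All names here (`FusedGatedMLP`, `gate_up_proj`, `down_proj`) are illustrative assumptions and are not taken from `gated_mlp.py` or `fused_moe/`.

```python
# Minimal sketch of a gated MLP with a fused gate_up_proj GEMM.
# Illustrative only; not the implementation in this directory.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusedGatedMLP(nn.Module):
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        # One weight holds both the gate and up projections side by side,
        # so any per-block quantization scales are also stored fused.
        self.gate_up_proj = nn.Linear(hidden_size, 2 * intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Single GEMM, then split the result into its gate and up halves.
        gate, up = self.gate_up_proj(x).chunk(2, dim=-1)
        return self.down_proj(F.silu(gate) * up)


if __name__ == "__main__":
    mlp = FusedGatedMLP(hidden_size=64, intermediate_size=256)
    out = mlp(torch.randn(2, 8, 64))
    print(out.shape)  # torch.Size([2, 8, 64])
```

Fusing the two projections halves the number of GEMM launches in the MLP; the trade-off is that weight-loading and quantization code must pack (and scale) the gate and up halves consistently, which is what the fused-scales commit addresses for FP8 blockwise checkpoints.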