TensorRT-LLM/tensorrt_llm/_torch/modules
Latest commit: 8c1cfc872b by Balaram Buddharaju, 2025-12-23 18:14:30 -08:00
[TRTLLM-9493][feat] Custom AllToAll for helix parallelism (#9986)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
| Name | Last commit | Date |
|------|-------------|------|
| `fla` | [TRTLLM-9432][feat] Reduce synchronization and recompilation for qwen3-next (#9691) | 2025-12-23 10:14:29 +08:00 |
| `fused_moe` | [https://nvbugs/5747674][fix] Add contiguous() before view() in load_expert_w3_w1_weight and load (#10136) | 2025-12-22 21:03:34 -05:00 |
| `mamba` | [TRTLLM-9432][feat] Reduce synchronization and recompilation for qwen3-next (#9691) | 2025-12-23 10:14:29 +08:00 |
| `__init__.py` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `attention.py` | [TRTLLM-9493][feat] Custom AllToAll for helix parallelism (#9986) | 2025-12-23 18:14:30 -08:00 |
| `decoder_layer.py` | chore: Change the type annotations of input_ids and position_ids to int32. (#4632) | 2025-06-07 16:10:47 +08:00 |
| `embedding.py` | [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (#7838) | 2025-12-04 13:32:11 +08:00 |
| `gated_mlp.py` | [None][feat] spark cublas LUT table for llama-8b-bf16 perf (#9811) | 2025-12-12 22:37:56 -05:00 |
| `layer_norm.py` | [TRTLLM-9259][perf] Use torch.compile to fuse copy + layernorm within the LayerNorm module (#9052) | 2025-11-11 18:11:00 -08:00 |
| `linear.py` | [None][fix] NVFP4 linear method's weight and weight_scale padding (#10148) | 2025-12-22 15:00:31 +08:00 |
| `logits_processor.py` | feat: LogitsProcessor in PyTorch backend (#3145) | 2025-05-01 14:15:30 -07:00 |
| `mlp.py` | feat: add LLmArgs option to force using dynamic quantization (#5346) | 2025-07-01 12:16:09 -07:00 |
| `multi_stream_utils.py` | [None][refactor] Refactor Torch Compile Backend, MoeLoadBalancer and warmup Logic (#6615) | 2025-08-19 09:58:44 +08:00 |
| `qk_norm_attention.py` | [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) | 2025-12-15 20:05:20 -08:00 |
| `rms_norm.py` | [None][chore] Refine qwen3-next implementation. (#8064) | 2025-09-30 15:05:13 -04:00 |
| `rotary_embedding.py` | [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) | 2025-12-15 20:05:20 -08:00 |
| `swiglu.py` | [None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021) | 2025-08-27 13:02:10 +08:00 |
| `triton_linear.py` | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
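The directory maps to the Python package `tensorrt_llm._torch.modules`, so each entry above is importable as a submodule of that package. A minimal usage sketch, assuming `rms_norm.py` exposes an `RMSNorm` class whose constructor accepts `hidden_size`, `eps`, and `dtype` keywords; these argument names are assumptions for illustration, not the verified API:

```python
# Hypothetical sketch: the import path mirrors the directory listing above, but the
# constructor arguments are assumptions rather than the library's documented API.
import torch

from tensorrt_llm._torch.modules.rms_norm import RMSNorm  # from rms_norm.py above

# Assumed keyword arguments for illustration only.
norm = RMSNorm(hidden_size=4096, eps=1e-6, dtype=torch.bfloat16)

x = torch.randn(2, 8, 4096, dtype=torch.bfloat16, device="cuda")
y = norm(x)  # normalized activations, same shape as x
```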