TensorRT-LLM/tensorrt_llm/_torch/modules
Latest commit fd2af8d58a by shuyixiong:
[TRTLLM-9771][feat] Support partial update weight for fp8 (#10456)
2026-01-22 14:46:05 +08:00
Name | Last commit | Last updated
fla/ | [TRTLLM-9432][feat] Reduce synchronization and recompilation for qwen3-next (#9691) | 2025-12-23 10:14:29 +08:00
fused_moe/ | [TRTLLM-9771][feat] Support partial update weight for fp8 (#10456) | 2026-01-22 14:46:05 +08:00
mamba/ | [TRTLLM-10060][feat] Enable attention dp for Nemotron Super v3. (#10347) | 2026-01-13 17:13:55 +08:00
__init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
attention.py | [None][chore] Revert NVIDIA/TensorRT-LLM#10847 (#10869) | 2026-01-21 11:08:40 +08:00
decoder_layer.py | chore: Change the type annotations of input_ids and position_ids to int32. (#4632) | 2025-06-07 16:10:47 +08:00
embedding.py | [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (#7838) | 2025-12-04 13:32:11 +08:00
gated_mlp.py | [None][feat] spark cublas LUT table for llama-8b-bf16 perf (#9811) | 2025-12-12 22:37:56 -05:00
layer_norm.py | [TRTLLM-9259][perf] Use torch.compile to fuse copy + layernorm within the LayerNorm module (#9052) | 2025-11-11 18:11:00 -08:00
linear.py | [TRTLLM-9771][feat] Support partial update weight for fp8 (#10456) | 2026-01-22 14:46:05 +08:00
logits_processor.py | feat: LogitsProcessor in PyTorch backend (#3145) | 2025-05-01 14:15:30 -07:00
mlp.py | [None][fix] Enable AttentionDP on Qwen3-VL and fix test (#10435) | 2026-01-10 00:13:26 +09:00
multi_stream_utils.py | [None][refactor] Refactor Torch Compile Backend, MoeLoadBalancer and warmup Logic (#6615) | 2025-08-19 09:58:44 +08:00
qk_norm_attention.py | [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) | 2025-12-15 20:05:20 -08:00
rms_norm.py | [None][feat] Adding torch ext API for FusedAddRMSNormQuant kernel (#9905) | 2026-01-15 07:29:15 +08:00
rotary_embedding.py | [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) | 2025-12-15 20:05:20 -08:00
swiglu.py | [None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021) | 2025-08-27 13:02:10 +08:00
triton_linear.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00