| Name | Last commit | Last commit date |
| --- | --- | --- |
| fla | [TRTLLM-9432][feat] Reduce synchronization and recompilation for qwen3-next (#9691) | 2025-12-23 10:14:29 +08:00 |
| fused_moe | [TRTLLM-9108][feat] Add test configurable moe module multi gpu (#10699) | 2026-01-23 10:16:58 +08:00 |
| mamba | [TRTLLM-10060][feat] Enable attention dp for Nemotron Super v3. (#10347) | 2026-01-13 17:13:55 +08:00 |
| __init__.py | | |
| attention.py | [https://nvbugs/5322131][feat] Multi-LoRA serving with CUDA Graph (#8279) | 2026-01-22 14:01:18 +01:00 |
| decoder_layer.py | chore: Change the type annotations of input_ids and position_ids to int32. (#4632) | 2025-06-07 16:10:47 +08:00 |
| embedding.py | [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (#7838) | 2025-12-04 13:32:11 +08:00 |
| gated_mlp.py | [None][feat] spark cublas LUT table for llama-8b-bf16 perf (#9811) | 2025-12-12 22:37:56 -05:00 |
| layer_norm.py | [TRTLLM-9259][perf] Use torch.compile to fuse copy + layernorm within the LayerNorm module (#9052) | 2025-11-11 18:11:00 -08:00 |
| linear.py | [TRTLLM-9771][feat] Support partial update weight for fp8 (#10456) | 2026-01-22 14:46:05 +08:00 |
| logits_processor.py | | |
| mlp.py | [None][fix] Enable AttentionDP on Qwen3-VL and fix test (#10435) | 2026-01-10 00:13:26 +09:00 |
| multi_stream_utils.py | [None][refactor] Refactor Torch Compile Backend, MoeLoadBalancer and warmup Logic (#6615) | 2025-08-19 09:58:44 +08:00 |
| qk_norm_attention.py | [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) | 2025-12-15 20:05:20 -08:00 |
| rms_norm.py | [None][fix] Update RMSNorm custom op plumbing (#10843) | 2026-01-22 21:03:22 +08:00 |
| rotary_embedding.py | [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) | 2025-12-15 20:05:20 -08:00 |
| swiglu.py | [None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021) | 2025-08-27 13:02:10 +08:00 |
| triton_linear.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
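The layer_norm.py commit ([TRTLLM-9259]) fuses a copy with the following layernorm via torch.compile. A minimal sketch of that pattern, assuming a PyTorch 2.x environment; the function name and shapes are illustrative, not the module's actual API:

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: torch.compile (Inductor) can fuse the buffer copy
# and the layernorm into a single kernel, saving one round trip to memory.
@torch.compile
def copy_then_layernorm(dst: torch.Tensor, src: torch.Tensor,
                        weight: torch.Tensor, bias: torch.Tensor,
                        eps: float = 1e-5) -> torch.Tensor:
    dst.copy_(src)  # copy into the persistent buffer
    return F.layer_norm(dst, dst.shape[-1:], weight, bias, eps)
```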
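rms_norm.py plumbs an RMSNorm custom op (#10843). For reference, the textbook RMSNorm the module is built around; a minimal sketch, not TensorRT-LLM's implementation:

```python
import torch
from torch import nn

class RMSNorm(nn.Module):
    """Minimal RMSNorm: scale by the reciprocal root-mean-square of the
    hidden dimension, then by a learned weight. Illustrative only."""

    def __init__(self, hidden_size: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        var = x.pow(2).mean(dim=-1, keepdim=True)
        return x * torch.rsqrt(var + self.eps) * self.weight
```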
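swiglu.py wraps SwiGLU in a custom op to avoid a redundant device copy (#7021). A minimal sketch of the activation and one way to register it as a custom op, assuming PyTorch >= 2.4 for torch.library.custom_op; the "example::swiglu" namespace is made up:

```python
import torch
import torch.nn.functional as F

# SwiGLU: split the fused gate/up projection in half and gate with SiLU.
# The "example::" namespace is hypothetical, not TensorRT-LLM's.
@torch.library.custom_op("example::swiglu", mutates_args=())
def swiglu(x: torch.Tensor) -> torch.Tensor:
    gate, up = x.chunk(2, dim=-1)
    return F.silu(gate) * up

# Fake (meta) kernel so the op traces cleanly under torch.compile.
@swiglu.register_fake
def _(x: torch.Tensor) -> torch.Tensor:
    return x.new_empty(*x.shape[:-1], x.shape[-1] // 2)
```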