| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `fla/` | [None][feat] Support Qwen3 next (#7892) | 2025-09-29 21:16:07 +08:00 |
| `fused_moe/` | [None][fix] only support deepep post quant all2all on nvfp4 (#8041) | 2025-09-29 14:37:50 +08:00 |
| `mamba/` | [None][fix] enable NvFP4/FP8 quantization for Nemotron-H architecture (#7589) | 2025-09-09 11:42:22 +03:00 |
| `__init__.py` | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| `attention.py` | [None][feat] Support Qwen3 next (#7892) | 2025-09-29 21:16:07 +08:00 |
| `decoder_layer.py` | chore: Change the type annotations of input_ids and position_ids to int32. (#4632) | 2025-06-07 16:10:47 +08:00 |
| `embedding.py` | [TRTLLM-6741] [feat] enable LM tp for MTP, under attention dp case (cherry-pick #7128) (#7571) | 2025-09-17 09:41:32 +08:00 |
| `gated_mlp.py` | [https://nvbugs/5505402] [fix] Disable deep_gemm for Qwen3 QKNormRoPEAttention and Linear layers due to accuracy issues (#7616) | 2025-09-10 18:30:48 +01:00 |
| `layer_norm.py` | [#6187][feat] add LayerNorm module (#6625) | 2025-08-12 21:43:30 +02:00 |
| `linear.py` | [None][fix] Fix dummy load format for DeepSeek. (#7874) | 2025-09-24 23:03:16 +08:00 |
| `logits_processor.py` | feat: LogitsProcessor in PyTorch backend (#3145) | 2025-05-01 14:15:30 -07:00 |
| `mlp.py` | feat: add LLmArgs option to force using dynamic quantization (#5346) | 2025-07-01 12:16:09 -07:00 |
| `multi_stream_utils.py` | [None][refactor] Refactor Torch Compile Backend, MoeLoadBalancer and warmup Logic (#6615) | 2025-08-19 09:58:44 +08:00 |
| `qk_norm_attention.py` | [None][feat] Support Qwen3 next (#7892) | 2025-09-29 21:16:07 +08:00 |
| `rms_norm.py` (sketch below) | [None][feat] Support Qwen3 next (#7892) | 2025-09-29 21:16:07 +08:00 |
| `rotary_embedding.py` (sketch below) | [TRTLLM-7385][feat] Optimize Qwen2/2.5-VL performance (#7250) | 2025-09-22 03:40:02 -07:00 |
| `swiglu.py` (sketch below) | [None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021) | 2025-08-27 13:02:10 +08:00 |
| `triton_linear.py` | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
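For orientation, a few of the listed modules name standard transformer building blocks; the sketches below are minimal reference implementations, not the modules' actual APIs. First, `rms_norm.py`: RMSNorm as used in Qwen3-style models normalizes by the root-mean-square of the last dimension with a learned scale and no mean subtraction.

```python
import torch
from torch import nn


class RMSNorm(nn.Module):
    """Minimal RMSNorm sketch: learned scale, no mean subtraction."""

    def __init__(self, hidden_size: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Compute in fp32 for numerical stability, then cast back.
        variance = x.float().pow(2).mean(-1, keepdim=True)
        x_normed = x.float() * torch.rsqrt(variance + self.eps)
        return self.weight * x_normed.to(x.dtype)
```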
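`rotary_embedding.py` implements rotary position embeddings (RoPE). A minimal sketch of the standard rotate-half formulation, assuming precomputed `cos`/`sin` tables broadcastable to the query/key shapes:

```python
import torch


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # (x1, x2) -> (-x2, x1) on the last dimension.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_rope(q: torch.Tensor, k: torch.Tensor,
               cos: torch.Tensor, sin: torch.Tensor):
    # Rotate query and key by the position-dependent angle tables.
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin
```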
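Finally, `swiglu.py`: its latest commit wraps SwiGLU in a custom op to avoid a redundant device copy. The activation itself, assuming the common fused `[gate, up]` projection layout (the custom-op wrapping is not shown here):

```python
import torch
import torch.nn.functional as F


def swiglu(x: torch.Tensor) -> torch.Tensor:
    # Split the fused gate/up projection and gate with SiLU:
    # swiglu(x) = silu(gate) * up.
    gate, up = x.chunk(2, dim=-1)
    return F.silu(gate) * up
```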