| Name | Last commit | Last updated |
|------|-------------|--------------|
| attention | [TRTLLM-9798][feat] Change to use new DeepGEMM MQA sm100 kernel for MTP-3 (#10226) | 2025-12-24 14:39:12 +08:00 |
| auto_deploy | [https://nvbugs/5747878][fix] unwaive llama4 scout tests (#10468) | 2026-01-07 23:33:45 -05:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00 |
| distributed | [TRTLLM-9467][fix] Fix PP+CP combination with helix parallelism (#10312) | 2026-01-01 13:42:53 -05:00 |
| executor | [https://nvbugs/5717993][fix] Add execution_stream across PyExecutor, KVCacheManager, PeftCacheManager to ensure proper CUDA stream synchronization between KV cache transfer operations and model forward kernels (#10060) | 2025-12-31 09:22:54 -08:00 |
| misc | [None][perf] TRTLLM MoE maps to lower tuning buckets when ep>1 (#9998) | 2026-01-05 17:16:12 +01:00 |
| modeling | [None][feat] support Qwen3-VL dense model in pytorch backend (#9060) | 2025-12-31 17:54:26 +09:00 |
| models/checkpoints/hf | [TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (#9583) | 2025-12-05 16:07:20 +01:00 |
| modules | [https://nvbugs/5784543][fix] Setup dist before using autotuner (#10491) | 2026-01-08 10:32:50 +08:00 |
| multi_gpu | [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning (#8531) | 2026-01-05 15:44:37 +08:00 |
| multi_gpu_modeling | [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure (#8440) | 2025-11-20 12:43:13 -05:00 |
| multimodal | [None][feat] EPD for Qwen3 VL (#10470) | 2026-01-08 06:45:54 -05:00 |
| ray_orchestrator | [TRTLLM-9467][fix] Fix PP+CP combination with helix parallelism (#10312) | 2026-01-01 13:42:53 -05:00 |
| sampler | [None][fix] avoid implicit cudaStreamSynchronize in sample_async (#10120) | 2025-12-23 10:15:40 +08:00 |
| speculative | [https://nvbugs/5749988][fix] Remove redundant qwen3 spec dec test (#10387) | 2026-01-06 11:46:34 -05:00 |
| thop | [None][feat] CuteDSL MOE FC1 Enhancement (#10088) | 2026-01-06 09:30:43 +08:00 |
| helpers.py | [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) | 2025-12-04 08:03:33 +02:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_connector.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00 |
| test_model_config.py | [TRTLLM-10171][fix] Correct attention handling in ModelConfig and KVCacheManager (#10330) | 2026-01-04 06:07:30 -05:00 |