| Name | Last commit message | Last commit date |
|---|---|---|
| attention | [None][chore] Waive star attention unittests (#10439) | 2026-01-16 10:12:32 +08:00 |
| auto_deploy | [None][feat] AutoDeploy: Flashinfer kernels bringup (#10867) | 2026-01-29 14:59:29 -08:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00 |
| distributed | [TRTLLM-9467][fix] Fix PP+CP combination with helix parallelism (#10312) | 2026-01-01 13:42:53 -05:00 |
| executor | [None][chore] Async Transfer Manager (#9891) | 2026-01-20 12:12:47 -05:00 |
| misc | [TRTLLM-10308][feat] AutoTuner Cache: reorganize cache file for distributed tuning (#10956) | 2026-01-27 16:39:40 +08:00 |
| modeling | [https://nvbugs/5761391][fix] Include triton-kernels as a packaged dependency (#10471) | 2026-01-28 19:56:32 -08:00 |
| models/checkpoints/hf | [TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (#9583) | 2025-12-05 16:07:20 +01:00 |
| modules | [https://nvbugs/5761391][fix] Include triton-kernels as a packaged dependency (#10471) | 2026-01-28 19:56:32 -08:00 |
| multi_gpu | [TRTLLM-10048][feat] Fuse the AllGather for expert statistics required by the EPLB. (#10885) | 2026-01-26 17:59:03 +08:00 |
| multi_gpu_modeling | [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) | 2025-11-20 12:43:13 -05:00 |
| multimodal | [TRTLLM-9522][test] cover LLM API multi_modal_embeddings (#9963) | 2026-01-12 11:38:22 +01:00 |
| ray_orchestrator | [TRTLLM-9771][feat] Support partial update weight for fp8 (#10456) | 2026-01-22 14:46:05 +08:00 |
| sampler | [TRTLLM-10312][perf] Improve performance of _write_finish_reasons in TorchSampler (#10459) | 2026-01-29 11:06:09 -05:00 |
| speculative | [TRTC-122][feat] Eagle3 Specdec UX improvements (#10124) | 2026-01-22 07:24:11 -08:00 |
| thop | [TRTLLM-9390][chore] Add Fake OPs for One-Sided AlltoAll. (#11002) | 2026-01-27 15:55:07 +08:00 |
| helpers.py | [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) | 2025-12-04 08:03:33 +02:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_connector.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00 |
| test_model_config.py | [TRTLLM-10171][fix] Correct attention handling in ModelConfig and KVCacheManager (#10330) | 2026-01-04 06:07:30 -05:00 |