| Name | Last commit | Last commit date |
| --- | --- | --- |
| auto_deploy | [AutoDeploy] re-enable waive for flaky AD test (#5867) | 2025-07-09 11:47:48 +09:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00 |
| modeling | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| modules | [ci] speedup fused moe tests (#5726) | 2025-07-07 18:03:15 +03:00 |
| multi_gpu | Waive L0 test (#6002) | 2025-07-14 19:55:34 +09:00 |
| multi_gpu_modeling | [TRTLLM-5530][BREAKING CHANGE] refactor: unify KvCacheConfig in LLM class for pytorch backend (#5752) | 2025-07-16 16:42:59 +08:00 |
| speculative | [https://nvbugs/5355316] fix: update torch.compile option to fix triton store_cubin error (#5865) | 2025-07-14 17:17:30 +08:00 |
| thop | [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) | 2025-07-12 15:50:31 +09:00 |
| helpers.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_attention_mla.py | fix mla test (#5240) | 2025-06-17 15:26:25 +08:00 |
| test_attention_no_cache.py | refactor(test): remove random context sequence lengths and set seed for reproducibility in attention tests (#3919) | 2025-04-29 10:08:04 +08:00 |
| test_attention.py | reduce num layers in attention test (#3509) | 2025-04-14 12:43:59 +08:00 |
| test_autotuner.py | feat: Enhance AutoTuner inference path and code readability (#4466) | 2025-06-04 10:53:11 +08:00 |
| test_beam_search.py | Breaking change: perf: [TRTLLM-4662] Enable cuda graph by default (#5480) | 2025-07-14 16:42:23 +08:00 |
| test_flashinfer_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_flashinfer_star_attn.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_fp8_per_tensor_scale_tllmg_gemm.py | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00 |
| test_group_rmn_norm.py | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00 |
| test_mnnvl_memory.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| test_overlap_scheduler_input.json | refactor: Unify request order in TRT and PyTorch workflow (#4096) | 2025-05-20 18:49:27 +02:00 |
| test_overlap_scheduler.py | [ci] parallelize torch unittests (#5714) | 2025-07-09 11:05:57 +03:00 |
| test_pytorch_model_engine.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| test_resource_manager.py | fix: adjust window sizes of VSWA at torch backend (#5880) | 2025-07-15 17:41:54 +08:00 |
| test_return_logits.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| test_share_tensor.py | [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) | 2025-07-10 05:12:53 +09:00 |
| test_trtllm_sampler.py | [NvBug 5370718, 5371538] fix: Fix incremental detokenization (#5825) | 2025-07-10 16:30:00 +08:00 |
| test_vanilla_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |