| Name | Last commit | Date |
| --- | --- | --- |
| auto_deploy | fix: build_config in TorchLlmArgs and avoid arbitrary args (#4972) | 2025-06-15 17:51:56 -07:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| modeling | [fix][test] Speedup Nemotron NAS unittests (#5202) | 2025-06-15 11:26:03 +03:00 |
| modules | feat: large-scale EP (part 7: DeepEP integration) (#4792) | 2025-06-14 19:12:38 +08:00 |
| multi_gpu | Use backend to replace macro to control enablement of MNNVL all reduce (#4635) | 2025-06-12 11:22:49 +08:00 |
| multi_gpu_modeling | [fix] Fix llama 4 long context (#4809) | 2025-06-04 07:48:08 +08:00 |
| speculative | Speculation: Draft Target in new FW (#4558) | 2025-06-17 02:26:08 +08:00 |
| thop | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| helpers.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_attention_mla.py | fix mla test (#5240) | 2025-06-17 15:26:25 +08:00 |
| test_attention_no_cache.py | refactor(test): remove random context sequence lengths and set seed for reproducibility in attention tests (#3919) | 2025-04-29 10:08:04 +08:00 |
| test_attention.py | reduce num layers in attention test (#3509) | 2025-04-14 12:43:59 +08:00 |
| test_autotuner.py | feat: Enhance AutoTuner inference path and code readability (#4466) | 2025-06-04 10:53:11 +08:00 |
| test_flashinfer_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_flashinfer_star_attn.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_fp8_per_tensor_scale_tllmg_gemm.py | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00 |
| test_group_rmn_norm.py | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00 |
| test_mnnvl_memory.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| test_overlap_scheduler_input.json | refactor: Unify request order in TRT and PyTorch workflow (#4096) | 2025-05-20 18:49:27 +02:00 |
| test_overlap_scheduler.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| test_pytorch_model_engine.py | [nvbug/5314469][feat] Include the executor's max batch size in CUDA g… (#4843) | 2025-06-09 08:31:35 -04:00 |
| test_resource_manager.py | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00 |
| test_return_logits.py | fix: build_config in TorchLlmArgs and avoid arbitrary args (#4972) | 2025-06-15 17:51:56 -07:00 |
| test_trtllm_sampler.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| test_vanilla_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |