TensorRT-LLM/tests/unittest/_torch
Latest commit 12ffdcbf53 by QI JUN: CI: waive test_ad_build_small_multi (#5071), 2025-06-10 14:54:05 +08:00
Name | Last commit | Last updated
auto_deploy | CI: waive test_ad_build_small_multi (#5071) | 2025-06-10 14:54:05 +08:00
compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00
modeling | chore: Refine weight prefetching. (#4893) | 2025-06-09 21:24:16 +08:00
modules | [https://nvbugs/5332927] Waive new tests (#5051) | 2025-06-10 05:17:54 +08:00
multi_gpu | [TRTLLM-4647][fix] Fix the no fusion allreduce hanging (#4594) | 2025-06-04 18:26:13 -07:00
multi_gpu_modeling | [fix] Fix llama 4 long context (#4809) | 2025-06-04 07:48:08 +08:00
speculative | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00
thop | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00
helpers.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00
pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00
test_attention_mla.py | chore: Waive CI failure. (#5069) | 2025-06-10 14:04:10 +08:00
test_attention_no_cache.py | refactor(test): remove random context sequence lengths and set seed for reproducibility in attention tests (#3919) | 2025-04-29 10:08:04 +08:00
test_attention.py | reduce num layers in attention test (#3509) | 2025-04-14 12:43:59 +08:00
test_autotuner.py | feat: Enhance AutoTuner inference path and code readability (#4466) | 2025-06-04 10:53:11 +08:00
test_flashinfer_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00
test_flashinfer_star_attn.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00
test_fp8_per_tensor_scale_tllmg_gemm.py | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00
test_group_rms_norm.py | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00
test_mnnvl_memory.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00
test_overlap_scheduler_input.json | refactor: Unify request order in TRT and PyTorch workflow (#4096) | 2025-05-20 18:49:27 +02:00
test_overlap_scheduler.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00
test_pytorch_model_engine.py | [nvbug/5314469][feat] Include the executor's max batch size in CUDA g… (#4843) | 2025-06-09 08:31:35 -04:00
test_resource_manager.py | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00
test_return_logits.py | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00
test_trtllm_sampler.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00
test_vanilla_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00
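
The entries above are standalone pytest files. A minimal sketch of running one of them locally, assuming a standard pytest setup, that commands are issued from the TensorRT-LLM repository root, and with test_attention.py chosen only as an example:

    # Command-line form (assumed layout):
    #   pytest tests/unittest/_torch/test_attention.py -v
    import pytest

    if __name__ == "__main__":
        # Programmatic equivalent of the command above; exits with pytest's return code.
        raise SystemExit(pytest.main(["tests/unittest/_torch/test_attention.py", "-v"]))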