TensorRT-LLM/tests/unittest/_torch
Latest commit: b99c5ce8c1 by yunruis, 2025-06-14 17:36:22 +08:00
Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560)
Signed-off-by: yunruis <yunruis@nvidia.com>
Signed-off-by: kduan <176893526+Kefeng-Duan@users.noreply.github.com>
Signed-off-by: Kefeng-Duan <176893526+Kefeng-Duan@users.noreply.github.com>
Co-authored-by: kduan <176893526+Kefeng-Duan@users.noreply.github.com>
auto_deploy [nvbugs/5331013] fix AutoDeploy for PyTorch 25.05 dependency upgrade (#5106) 2025-06-12 13:07:27 +08:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
modeling [test] Use LLM API for Nemotron-H correctness test (#5097) 2025-06-12 09:54:46 +03:00
modules fix: [nvbugs/5324229] Fix broken WInt4AFP8FusedMoEMethod since FusedMoE refactor. (#4930) 2025-06-13 10:21:32 +08:00
multi_gpu Use backend to replace macro to control enablement of MNNVL all reduce (#4635) 2025-06-12 11:22:49 +08:00
multi_gpu_modeling [fix] Fix llama 4 long context (#4809) 2025-06-04 07:48:08 +08:00
speculative [nvbug/5319281][fix] Stop drafting when we hit the draft model's max seq len (#4879) 2025-06-13 11:06:36 -04:00
thop Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) 2025-06-14 17:36:22 +08:00
helpers.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_attention_mla.py [fix] Fix test_attention_mla (#5084) 2025-06-10 14:20:11 -07:00
test_attention_no_cache.py refactor(test): remove random context sequence lengths and set seed for reproducibility in attention tests (#3919) 2025-04-29 10:08:04 +08:00
test_attention.py reduce num layers in attention test (#3509) 2025-04-14 12:43:59 +08:00
test_autotuner.py feat: Enhance AutoTuner inference path and code readability (#4466) 2025-06-04 10:53:11 +08:00
test_flashinfer_attention.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_flashinfer_star_attn.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_fp8_per_tensor_scale_tllmg_gemm.py ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) 2025-06-09 19:04:11 +08:00
test_group_rms_norm.py feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) 2025-05-13 08:52:53 +08:00
test_mnnvl_memory.py feat: Add MNNVL MoE A2A support (#3504) 2025-04-25 17:29:08 +08:00
test_overlap_scheduler_input.json refactor: Unify request order in TRT and PyTorch workflow (#4096) 2025-05-20 18:49:27 +02:00
test_overlap_scheduler.py chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) 2025-05-28 18:43:04 +08:00
test_pytorch_model_engine.py [nvbug/5314469][feat] Include the executor's max batch size in CUDA g… (#4843) 2025-06-09 08:31:35 -04:00
test_resource_manager.py ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) 2025-06-09 19:04:11 +08:00
test_return_logits.py [fix] Reenable test return logits (#5160) 2025-06-13 06:07:22 +02:00
test_trtllm_sampler.py chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) 2025-05-28 18:43:04 +08:00
test_vanilla_attention.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00