TensorRT-LLM/tests/unittest/_torch
Latest commit: 9258187e98 by Venky, 2025-07-08 15:35:27 +09:00
Waive some test_llama_eagle3 unittests (#5811)
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Name | Last commit | Last commit date
auto_deploy | [ci] speedup fused moe tests (#5726) | 2025-07-07 18:03:15 +03:00
compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00
debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00
modeling | fix: Investigate Gemma3 1B decoder output discrepancy (#5564) | 2025-07-04 13:14:13 +08:00
modules | [ci] speedup fused moe tests (#5726) | 2025-07-07 18:03:15 +03:00
multi_gpu | [ci] small multigpu speedups (#5643) | 2025-07-03 08:06:10 -04:00
multi_gpu_modeling | [Test] - Waive or fix few known test failures (#5769) | 2025-07-06 21:14:16 +08:00
speculative | Waive some test_llama_eagle3 unittests (#5811) | 2025-07-08 15:35:27 +09:00
thop | [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615) | 2025-07-07 18:04:57 +08:00
helpers.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00
pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00
test_attention_mla.py | fix mla test (#5240) | 2025-06-17 15:26:25 +08:00
test_attention_no_cache.py | refactor(test): remove random context sequence lengths and set seed for reproducibility in attention tests (#3919) | 2025-04-29 10:08:04 +08:00
test_attention.py | reduce num layers in attention test (#3509) | 2025-04-14 12:43:59 +08:00
test_autotuner.py | feat: Enhance AutoTuner inference path and code readability (#4466) | 2025-06-04 10:53:11 +08:00
test_beam_search.py | [TRTLLM-3442] feat: added beam search support to the PyTorch Workflow (#5333) | 2025-07-05 01:35:13 +09:00
test_flashinfer_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00
test_flashinfer_star_attn.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00
test_fp8_per_tensor_scale_tllmg_gemm.py | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00
test_group_rms_norm.py | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00
test_mnnvl_memory.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00
test_overlap_scheduler_input.json | refactor: Unify request order in TRT and PyTorch workflow (#4096) | 2025-05-20 18:49:27 +02:00
test_overlap_scheduler.py | [TRTLLM-5530][BREAKING CHANGE]: enhance the llm args pytorch config part 1(cuda_graph_config) (#5014) | 2025-06-30 11:05:40 +08:00
test_pytorch_model_engine.py | [TRTLLM-6291] feat: Add user-provided speculative decoding support (#5204) | 2025-07-07 16:30:43 +02:00
test_resource_manager.py | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00
test_return_logits.py | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00
test_trtllm_sampler.py | perf: Use tokenizers API to optimize incremental detokenization perf (#5574) | 2025-07-01 09:35:25 -04:00
test_vanilla_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00