TensorRT-LLM/tests/unittest/_torch
Yan Chunwei 0c26059703
chore: Cleanup deprecated APIs from LLM-API (part 1/2) (#3732)
* remove beam_width and max_new_tokens
* remove min_length
* remove return_num_sequences

---------

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2025-05-07 13:20:25 +08:00
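
The bullets above list deprecated sampling fields dropped from the LLM-API in this cleanup. A minimal sketch of post-cleanup usage, assuming the current counterparts are max_tokens (for max_new_tokens), min_tokens (for min_length), and n/best_of (for beam_width and return_num_sequences); the model name is only a placeholder, and the field names should be verified against the SamplingParams docstring in your build:

from tensorrt_llm import LLM, SamplingParams

# Placeholder checkpoint; any LLM-API-supported model works here.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

sampling_params = SamplingParams(
    max_tokens=64,   # replaces the deprecated max_new_tokens
    min_tokens=1,    # assumed replacement for the deprecated min_length
    temperature=0.8,
    top_p=0.95,
    n=2,             # number of returned sequences, instead of return_num_sequences
    best_of=2,
)

for output in llm.generate(["Hello, my name is"], sampling_params):
    for seq in output.outputs:
        print(seq.text)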
auto_deploy [AutoDeploy][perf] Further optimize flashinfer backend in AutoDeploy (#4024) 2025-05-06 10:46:36 +08:00
compilation Update (#2978) 2025-03-23 16:39:35 +08:00
modeling feat: add Pytorch support of Vision Encoder for multimodal models (#3791) 2025-05-03 05:13:47 +08:00
modules chore: reorganize some unit tests of PyTorch (#3780) 2025-04-23 11:19:10 -07:00
multi_gpu [TRTLLM-3925, https://nvbugs/5245262] [fix] Normalize LLM.generate API (#3985) 2025-05-07 11:06:23 +08:00
multi_gpu_modeling [infra] Improve llama4 parallelism test coverage (#3821) 2025-05-02 16:15:04 -04:00
speculative [fix] Fix flashinfer + speculation issues (#3686) 2025-04-28 14:34:22 -04:00
thop TRTLLM-4624 feat: Add nvfp4 gemm and moe support for SM120 (#3770) 2025-04-29 11:19:11 -04:00
helpers.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
pattern_watcher.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
test_attention_mla.py [fix] Loosen the thresholds of test_attention_mla (#4074) 2025-05-06 11:31:09 +08:00
test_attention_no_cache.py refactor(test): remove random context sequence lengths and set seed for reproducibility in attention tests (#3919) 2025-04-29 10:08:04 +08:00
test_attention.py reduce num layers in attention test (#3509) 2025-04-14 12:43:59 +08:00
test_autotuner.py feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) 2025-04-08 14:28:36 +08:00
test_flashinfer_attention.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_flashinfer_star_attn.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_group_rms_norm.py feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) 2025-05-02 13:25:30 +08:00
test_mnnvl_memory.py feat: Add MNNVL MoE A2A support (#3504) 2025-04-25 17:29:08 +08:00
test_overlap_scheduler_input.json fix: Fix C++ decoder synchronization in PyTorch (#3106) 2025-04-23 23:55:27 +08:00
test_overlap_scheduler.py chore: Cleanup deprecated APIs from LLM-API (part 1/2) (#3732) 2025-05-07 13:20:25 +08:00
test_pytorch_model_engine.py chore: move all distributed related codes into _torch.distributed directory (#3511) 2025-04-15 08:39:17 +08:00
test_resource_manager.py fix: Fix C++ decoder synchronization in PyTorch (#3106) 2025-04-23 23:55:27 +08:00
test_return_logits.py cleanup logprob params (#4039) 2025-05-07 00:50:16 +08:00
test_trtllm_decoder.py chore: Cleanup deprecated APIs from LLM-API (part 1/2) (#3732) 2025-05-07 13:20:25 +08:00
test_vanilla_attention.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00