TensorRT-LLM/tests/unittest/_torch/sampler
Stefan Niebler d50010cd1f [https://nvbugs/5769815][fix] Fix offset calculation in _are_stop_words when using speculative decoding (#10854)
2026-02-09 23:53:40 +08:00
test_beam_search_util.py [TRTLLM-6756][feat] Add Beam Search to TorchSampler (#8509) 2025-12-01 18:48:04 +01:00
test_beam_search.py [TRTLLM-10030][perf] avoid syncs in beam search + other improvements (#11349) 2026-02-09 16:13:58 +01:00
test_best_of_n.py [None][ci] move unittests to sub-directories (#6635) 2025-08-20 05:42:22 -04:00
test_logits_logprobs.py [TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675) 2026-01-16 10:52:41 -08:00
test_torch_multi_arange.py [TRTLLM-8832][feat] fully async _select_generated_logits with tests (#8628) 2025-10-27 16:15:32 +01:00
test_torch_sampler.py [https://nvbugs/5769815][fix] Fix offset calculation in _are_stop_words when using speculative decoding (#10854) 2026-02-09 23:53:40 +08:00
test_trtllm_sampler.py [https://nvbugs/5708810][fix] Fix TRTLLMSampler (#9710) 2025-12-15 23:26:52 +01:00