TensorRT-LLM/tests/unittest/_torch/speculative
Latest commit 7c4344b92e by Ziyi Xiong (2025-11-18 15:41:56 -05:00):
[https://nvbugs/5590408][fix] Exclude num of draft tokens from mMaxSeqLenKv (#9210)
Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
test_draft_len_schedule.py [TRTLLM-8136][feat] Dynamic draft length in spec decode (stage 1). (#8194) 2025-11-18 11:13:39 -05:00
test_draft_target.py [TRTLLM-7457][ci] Update & cleanup unittest parallel config (#7254) 2025-08-27 00:45:58 -04:00
test_draft_token_tree_sampling.py [TRTLLM-6393][feat] add static tree sampling and verification (#7161) 2025-09-26 13:16:16 -04:00
test_draft_token_tree_verification.py [https://nvbugs/5508536][fix] Take Over (#8627): Reintroduce: Move stop_criteria to sample_async (#7041) (#8794) 2025-11-07 09:01:15 +01:00
test_dynamic_spec_decode.py [TRTLLM-8136][feat] Dynamic draft length in spec decode (stage 1). (#8194) 2025-11-18 11:13:39 -05:00
test_eagle3.py [https://nvbugs/5590408][fix] Exclude num of draft tokens from mMaxSeqLenKv (#9210) 2025-11-18 15:41:56 -05:00
test_kv_cache_reuse.py [TRTLLM-7457][ci] Update & cleanup unittest parallel config (#7254) 2025-08-27 00:45:58 -04:00
test_mtp.py [TRTLLM-8160][feat] Add max_total_draft_tokens (#8366) 2025-10-21 11:11:04 -04:00
test_ngram.py [TRTLLM-7457][ci] Update & cleanup unittest parallel config (#7254) 2025-08-27 00:45:58 -04:00
test_save_state.py [None][feat] Draft: Save state first pass (#7012) 2025-10-01 18:40:55 -04:00
test_spec_gate.py [TRTLLM-7412][feat] Turn off spec decode when the rolling average acceptance length drops below threshold. (#7283) 2025-10-13 15:51:14 -07:00
test_torch_rejection_sampling.py [None][fix] restore list[list[list[int]]] in add_token (#8502) 2025-10-20 22:34:57 -04:00
test_user_provided.py [TRTLLM-7457][ci] Update & cleanup unittest parallel config (#7254) 2025-08-27 00:45:58 -04:00