TensorRT-LLM/tensorrt_llm/_torch/speculative
Mike Iovine d9aef94431 [https://nvbugs/5814914][fix] Fix llama sm120 spec dec (#10765)
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2026-02-02 16:26:46 +08:00
__init__.py [None][feat] Implement sampling for MTP 1-model (#10019) 2025-12-31 13:48:34 -05:00
auto_heuristic.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
drafter.py [TRTLLM-8136][feat] Dynamic draft length in spec decode (stage 1). (#8194) 2025-11-18 11:13:39 -05:00
drafting_loops.py [https://nvbugs/5772414][fix] Fix draft token tree depth=1 corner case (#10385) 2026-01-05 17:20:14 +01:00
eagle3.py [TRTLLM-10325][feat] Refactor speculative decoding workers (#10768) 2026-01-21 13:05:29 -05:00
interface.py [https://nvbugs/5814914][fix] Fix llama sm120 spec dec (#10765) 2026-02-02 16:26:46 +08:00
model_drafter.py [TRTLLM-9962][feat] Some optimizations for two-model spec dec (#10208) 2025-12-28 12:52:04 +08:00
mtp.py [TRTLLM-10312][perf] Improve performance of _write_finish_reasons in TorchSampler (#10459) 2026-01-29 11:06:09 -05:00
ngram.py [TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586) 2025-11-25 09:40:55 -05:00
one_model_sampler.py [None][feat] Speculative One Model: FlashInfer sampling (#10284) 2026-01-20 12:56:43 -05:00
save_hidden_state.py [TRTLLM-8160][feat] Add max_total_draft_tokens (#8366) 2025-10-21 11:11:04 -04:00
spec_tree_manager.py [TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586) 2025-11-25 09:40:55 -05:00
speculation_gate.py [TRTLLM-7412][feat] Turn off spec decode when the rolling average acceptance length drops below threshold. (#7283) 2025-10-13 15:51:14 -07:00
utils.py [TRTLLM-9416][feat] Skip DS-v3.2 indexer MQA and Top-K for short sequences. (#9524) 2025-12-15 12:42:25 +08:00
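The `speculation_gate.py` entry above describes a mechanism for turning speculative decoding off when the rolling average acceptance length drops below a threshold. The following is a minimal illustrative sketch of that idea; the class name, method names, and window/threshold parameters are hypothetical and are not taken from TensorRT-LLM's actual implementation.

```python
from collections import deque


class SpeculationGate:
    """Hypothetical sketch: disable speculation once the rolling average
    acceptance length over the last `window` requests falls below
    `threshold`. Not TensorRT-LLM's real class or API."""

    def __init__(self, window: int, threshold: float):
        self.window = window
        self.threshold = threshold
        self.history: deque = deque(maxlen=window)  # keeps only last `window` values
        self.enabled = True

    def record(self, acceptance_length: float) -> bool:
        """Record one request's acceptance length; return whether
        speculative decoding should remain enabled."""
        if not self.enabled:
            return False
        self.history.append(acceptance_length)
        # Only judge once the window is full, to avoid noisy early decisions.
        if len(self.history) == self.window:
            avg = sum(self.history) / self.window
            if avg < self.threshold:
                self.enabled = False  # turn speculation off from here on
        return self.enabled
```

In this sketch the gate latches off permanently once tripped; a real system might instead re-enable speculation after a cool-down period.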