TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit: 7d31532850 by Stefan Niebler (2026-01-29 11:06:09 -05:00) — [TRTLLM-10312][perf] Improve performance of _write_finish_reasons in TorchSampler (#10459)
Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
__init__.py — [None][feat] Implement sampling for MTP 1-model (#10019) (2025-12-31 13:48:34 -05:00)
auto_heuristic.py
drafter.py
drafting_loops.py — [https://nvbugs/5772414][fix] Fix draft token tree depth=1 corner case (#10385) (2026-01-05 17:20:14 +01:00)
eagle3.py — [TRTLLM-10325][feat] Refactor speculative decoding workers (#10768) (2026-01-21 13:05:29 -05:00)
interface.py — [TRTLLM-10276][feat] Integrate cutedsl argmax kernel (#10476) (2026-01-26 22:08:47 -05:00)
model_drafter.py — [TRTLLM-9962][feat] Some optimizations for two-model spec dec (#10208) (2025-12-28 12:52:04 +08:00)
mtp.py — [TRTLLM-10312][perf] Improve performance of _write_finish_reasons in TorchSampler (#10459) (2026-01-29 11:06:09 -05:00)
ngram.py
one_model_sampler.py — [None][feat] Speculative One Model: FlashInfer sampling (#10284) (2026-01-20 12:56:43 -05:00)
save_hidden_state.py
spec_tree_manager.py
speculation_gate.py
utils.py
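The modules above (e.g. `drafter.py`, `model_drafter.py`, `one_model_sampler.py`) implement variants of speculative decoding: a cheap drafter proposes tokens and the target model verifies them. The following is a minimal, generic sketch of that draft-then-verify loop, not TensorRT-LLM's actual API; `greedy_target` and `verify` are hypothetical stand-ins chosen for illustration.

```python
def greedy_target(prefix):
    # Hypothetical target model: deterministically maps a token prefix
    # to its next token (a trivial rule, for demonstration only).
    return (sum(prefix) + 1) % 5

def verify(prefix, draft):
    # Target verifies the drafted tokens left to right. It accepts the
    # longest prefix of the draft that matches its own greedy choice;
    # on the first mismatch it substitutes its own token and stops.
    accepted = []
    context = list(prefix)
    for token in draft:
        expected = greedy_target(context)
        if token != expected:
            accepted.append(expected)  # target's correction ends the step
            return accepted
        accepted.append(token)
        context.append(token)
    # Every draft token matched: emit one bonus token from the target,
    # so a fully accepted draft of length n yields n + 1 new tokens.
    accepted.append(greedy_target(context))
    return accepted

# Drafter got the first two tokens right, then diverged:
print(verify([0], [1, 2, 0]))  # mismatch at the third token
# Drafter matched the target exactly, so all tokens plus a bonus land:
print(verify([0], [1, 2, 4]))
```

The key property is that one target pass scores several drafted positions, so each accepted token amortizes the target model's cost; the bonus token on full acceptance guarantees forward progress even when the draft length is zero.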