TensorRT-LLM/tensorrt_llm/_torch/speculative

Latest commit: 81222c3670 by Ziyi Xiong
[None] Fix warning when capturing CUDA graph (#9746)
Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
2025-12-10 19:22:38 -08:00
__init__.py [None][feat] Draft: Save state first pass (#7012) 2025-10-01 18:40:55 -04:00
auto_heuristic.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
drafter.py [TRTLLM-8136][feat] Dynamic draft length in spec decode (stage 1). (#8194) 2025-11-18 11:13:39 -05:00
drafting_loops.py [None] Fix warning when capturing CUDA graph (#9746) 2025-12-10 19:22:38 -08:00
eagle3.py [None][fix] Fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS for MTP/EAGLE (#9608) 2025-12-04 08:23:57 -08:00
interface.py [None][feat] Make 2-model spec dec use the 1-model kernels (Hopper) (#8810) 2025-12-09 11:06:31 -05:00
model_drafter.py [TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586) 2025-11-25 09:40:55 -05:00
mtp.py [None][fix] Fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS for MTP/EAGLE (#9608) 2025-12-04 08:23:57 -08:00
ngram.py [TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586) 2025-11-25 09:40:55 -05:00
save_hidden_state.py [TRTLLM-8160][feat] Add max_total_draft_tokens (#8366) 2025-10-21 11:11:04 -04:00
spec_tree_manager.py [TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586) 2025-11-25 09:40:55 -05:00
speculation_gate.py [TRTLLM-7412][feat] Turn off spec decode when the rolling average acceptance length drops below threshold. (#7283) 2025-10-13 15:51:14 -07:00
utils.py [TRTLLM-7954][feat] Target model KV cache rellocation (#8421) 2025-10-23 09:36:50 +08:00