TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit e47c787dd7 by Chang Liu, 2025-10-24 13:40:41 -04:00:
[TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Signed-off-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
Signed-off-by: Tracin <10434017+Tracin@users.noreply.github.com>
__init__.py [None][feat] Draft: Save state first pass (#7012) 2025-10-01 18:40:55 -04:00
auto_heuristic.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
drafter.py [TRTLLM-8160][feat] Add max_total_draft_tokens (#8366) 2025-10-21 11:11:04 -04:00
drafting_loops.py [TRTLLM-8160][feat] Add max_total_draft_tokens (#8366) 2025-10-21 11:11:04 -04:00
eagle3.py [None][fix] Fix MTP 2-model (#8115) 2025-10-03 10:13:50 -07:00
interface.py [TRTLLM-8160][feat] Add max_total_draft_tokens (#8366) 2025-10-21 11:11:04 -04:00
model_drafter.py [https://nvbugs/5556020][fix] test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_eagle3 dimension mismatch (#8517) 2025-10-22 09:58:22 +08:00
mtp.py [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) 2025-10-24 13:40:41 -04:00
ngram.py [TRTLLM-8160][feat] Add max_total_draft_tokens (#8366) 2025-10-21 11:11:04 -04:00
save_hidden_state.py [TRTLLM-8160][feat] Add max_total_draft_tokens (#8366) 2025-10-21 11:11:04 -04:00
spec_tree_manager.py [TRTLLM-6393][feat] add static tree sampling and verification (#7161) 2025-09-26 13:16:16 -04:00
speculation_gate.py [TRTLLM-7412][feat] Turn off spec decode when the rolling average acceptance length drops below threshold. (#7283) 2025-10-13 15:51:14 -07:00
utils.py [TRTLLM-7954][feat] Target model KV cache reallocation (#8421) 2025-10-23 09:36:50 +08:00
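The modules listed above make up the PyTorch-backend speculative decoding paths (EAGLE-3 in eagle3.py, MTP in mtp.py, n-gram drafting in ngram.py, draft-token budgeting and verification support in drafter.py, model_drafter.py, spec_tree_manager.py, and the acceptance-rate gate in speculation_gate.py). As rough orientation only, the sketch below shows how one of these paths might be enabled from the high-level LLM API; the specific config fields used here (max_draft_len, max_matching_ngram_size) and the model id are assumptions and may differ between TensorRT-LLM releases.

```python
# A minimal sketch, not an official example: enabling n-gram speculative
# decoding through TensorRT-LLM's high-level LLM API. Field names below
# are assumptions and may vary across releases; check the llmapi docs
# for your installed version.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import NGramDecodingConfig


def main():
    # Draft tokens are proposed from n-gram matches against the prompt and
    # generation history, then verified by the target model in one pass.
    spec_config = NGramDecodingConfig(
        max_draft_len=4,            # assumed: draft tokens proposed per step
        max_matching_ngram_size=2,  # assumed: longest n-gram used for lookup
    )

    llm = LLM(
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder model id
        speculative_config=spec_config,
    )

    outputs = llm.generate(
        ["The capital of France is"],
        SamplingParams(max_tokens=32),
    )
    for out in outputs:
        print(out.outputs[0].text)


if __name__ == "__main__":
    main()
```

Swapping in an EAGLE-3 or MTP configuration would follow the same pattern: construct the corresponding speculative config object and pass it as speculative_config, leaving drafting and verification to the modules in this directory.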