TensorRT-LLM/tensorrt_llm/_torch/speculative
| File | Latest commit | Last updated |
| --- | --- | --- |
| __init__.py | [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) | 2025-08-07 12:51:47 -04:00 |
| auto_heuristic.py | [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) | 2025-08-07 12:51:47 -04:00 |
| drafter.py | [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) | 2025-09-16 07:33:44 +08:00 |
| drafting_loops.py | [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) | 2025-09-26 11:28:05 +08:00 |
| eagle3.py | [TRTLLM-7728][feat] batched sampling by strategy (supersedes enable_mixed_sampler, cf. TRTLLM-7156) (#7294) | 2025-09-23 16:05:05 -07:00 |
| interface.py | [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) | 2025-09-26 11:28:05 +08:00 |
| model_drafter.py | [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) | 2025-09-26 11:28:05 +08:00 |
| mtp.py | [None][feat] Use list instead of torch tensor for new tokens in update requests (#7730) | 2025-09-23 10:40:08 -04:00 |
| ngram.py | [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) | 2025-09-16 07:33:44 +08:00 |
| utils.py | [TRTLLM-6746][feat] Enable two-model spec dec for MTP Eagle (#7001) | 2025-09-18 12:05:36 -04:00 |
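For orientation, these modules back the speculative-decoding paths that are selected through the PyTorch-backend LLM API's `speculative_config` (ngram.py, eagle3.py, and mtp.py roughly correspond to the NGram, Eagle3, and MTP drafting paths named in the commit messages above). The sketch below shows one way such a configuration is typically wired up; the exact field names (`max_draft_len`, `max_matching_ngram_size`, `max_concurrency`) and the model path are assumptions for illustration, not taken from this listing, so check `tensorrt_llm.llmapi` for the current signatures.

```python
# Hedged sketch: enabling NGram speculative decoding through the LLM API.
# Config field names and the model path are assumptions for illustration.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import NGramDecodingConfig

spec_config = NGramDecodingConfig(
    max_draft_len=4,            # draft tokens proposed per step (assumed name)
    max_matching_ngram_size=2,  # longest n-gram matched against prior tokens (assumed name)
    max_concurrency=8,          # cap on requests using speculation, cf. #6676 (assumed name)
)

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model path
    speculative_config=spec_config,
)

outputs = llm.generate(
    ["The capital of France is"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```

Swapping `NGramDecodingConfig` for an Eagle3- or MTP-style config would exercise the eagle3.py or mtp.py paths instead, following the same pattern of passing a single `speculative_config` object to `LLM`.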