TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit: 9ecc6db5b4 by Ziyi Xiong, 2025-10-13 23:34:22 -07:00
[https://nvbugs/5537878][fix] Reserve an extra slot for padded batch … (#8231)
Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
File               Last commit                                                                                   Date
__init__.py        [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676)                 2025-08-07 12:51:47 -04:00
auto_heuristic.py  [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676)                 2025-08-07 12:51:47 -04:00
drafter.py         [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651)              2025-09-16 07:33:44 +08:00
drafting_loops.py  [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363)     2025-09-26 11:28:05 +08:00
eagle3.py          [https://nvbugs/5537878][fix] Reserve an extra slot for padded batch … (#8231)                2025-10-13 23:34:22 -07:00
interface.py       [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363)     2025-09-26 11:28:05 +08:00
model_drafter.py   [None][fix] Fix chunked prefill state of draft request (#8067)                                2025-09-30 09:51:21 +08:00
mtp.py             [None][feat] Use list instead of torch tensor for new tokens in update requests (#7730)       2025-09-23 10:40:08 -04:00
ngram.py           [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651)              2025-09-16 07:33:44 +08:00
utils.py           [TRTLLM-6746][feat] Enable two-model spec dec for MTP Eagle (#7001)                           2025-09-18 12:05:36 -04:00