TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit: 4bac6b337e by Jin Li, 2025-10-14 05:51:45 -07:00
[https://nvbugs/5537348][fix] Use device tensor index for MTP (#8062)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
__init__.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
auto_heuristic.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
drafter.py [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) 2025-09-16 07:33:44 +08:00
drafting_loops.py [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) 2025-09-26 11:28:05 +08:00
eagle3.py [https://nvbugs/5537878][fix] Reserve an extra slot for padded batch … (#8231) 2025-10-13 23:34:22 -07:00
interface.py [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) 2025-09-26 11:28:05 +08:00
model_drafter.py [None][fix] Fix chunked prefill state of draft request (#8067) 2025-09-30 09:51:21 +08:00
mtp.py [https://nvbugs/5537348][fix] Use device tensor index for MTP (#8062) 2025-10-14 05:51:45 -07:00
ngram.py [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) 2025-09-16 07:33:44 +08:00
utils.py [TRTLLM-6746][feat] Enable two-model spec dec for MTP Eagle (#7001) 2025-09-18 12:05:36 -04:00
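For orientation, the sketch below shows the generic draft-and-verify pattern that the modules listed above implement in specialized forms (NGram lookups, Eagle3 draft heads, MTP). It is a minimal, library-agnostic illustration: `speculative_step`, `draft_model`, and `target_model` are hypothetical names chosen for this example and are not TensorRT-LLM APIs.

```python
# A minimal, library-agnostic sketch of greedy draft-and-verify speculative
# decoding. `draft_model` and `target_model` are hypothetical stand-ins that
# map a token sequence to its greedy next token; nothing here uses
# TensorRT-LLM APIs.
from typing import Callable, List


def speculative_step(
    tokens: List[int],
    draft_model: Callable[[List[int]], int],
    target_model: Callable[[List[int]], int],
    num_draft_tokens: int = 3,
) -> List[int]:
    # 1) Drafting: propose `num_draft_tokens` tokens cheaply with the draft model.
    draft = []
    ctx = list(tokens)
    for _ in range(num_draft_tokens):
        t = draft_model(ctx)
        draft.append(t)
        ctx.append(t)

    # 2) Verification: accept the longest prefix of drafted tokens that matches
    #    the target model's own greedy choices at each position.
    accepted = []
    ctx = list(tokens)
    for d in draft:
        t = target_model(ctx)
        if t != d:
            # First mismatch: keep the target's token and stop accepting drafts.
            accepted.append(t)
            break
        accepted.append(d)
        ctx.append(d)
    else:
        # All draft tokens accepted; the target model still yields one bonus token.
        accepted.append(target_model(ctx))
    return tokens + accepted


if __name__ == "__main__":
    # Toy models: the "draft" repeats the last token, the "target" counts up.
    draft = lambda seq: seq[-1]
    target = lambda seq: seq[-1] + 1
    print(speculative_step([1, 2, 3], draft, target))  # -> [1, 2, 3, 4]
```

The real implementations differ mainly in how the draft tokens are produced (pattern matching in ngram.py, auxiliary heads in eagle3.py and mtp.py, or a separate draft model driven by model_drafter.py) and in batching details such as CUDA graphs and the overlap scheduler referenced in the commit messages above.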