TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit: 8b216135f0 — [None][refactor] Move draft token padding out of Drafter (#7134), Mike Iovine, 2025-08-27 11:07:50 +02:00
__init__.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
auto_heuristic.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
drafter.py [None][feat] Optimize CUDA graph memory usage for spec decode cases (#6718) 2025-08-08 13:56:53 -04:00
eagle3.py [None][feat] Deepseek: Start Eagle work (#6210) 2025-08-22 12:57:17 -04:00
interface.py [None][feat] Deepseek: Start Eagle work (#6210) 2025-08-22 12:57:17 -04:00
model_drafter.py [None][refactor] Move draft token padding out of Drafter (#7134) 2025-08-27 11:07:50 +02:00
mtp.py [TRTLLM-7155][feat] Unify sampler handle logits implementation. (#6867) 2025-08-22 08:09:30 +02:00
ngram.py [None][refactor] Move draft token padding out of Drafter (#7134) 2025-08-27 11:07:50 +02:00
utils.py [None][feat] Deepseek: Start Eagle work (#6210) 2025-08-22 12:57:17 -04:00