TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit: e968f98b43 by Mike Iovine
[None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676)
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-08-07 12:51:47 -04:00
__init__.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
auto_heuristic.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
drafter.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
eagle3.py [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
interface.py [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
model_drafter.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
mtp.py [https://nvbugs/5252313][fix] Fix torch compile + MTP (#6554) 2025-08-05 10:31:29 -04:00
ngram.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
utils.py [TRTLLM-6409][feat] Enable guided decoding with speculative decoding (part 1: two-model engine) (#6300) 2025-08-07 05:53:48 -04:00