TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit: 8cf3faa26a — [feat] Auto-enable ngram with concurrency <= 32. (#6232)
Author: Simeng Liu
Signed-off-by: Simeng Liu <simengl@nvidia.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
Signed-off-by: Mike Iovine <mike.iovine7@gmail.com>
Co-authored-by: Mike Iovine <miovine@nvidia.com>
Co-authored-by: Mike Iovine <mike.iovine7@gmail.com>
Date: 2025-07-31 18:45:51 -04:00
__init__.py [refactor] Simplification of Speculative decoding configs - Part 2 (#5936) 2025-07-23 09:20:27 +08:00
drafter.py [TRTLLM-6392][feat] Support turning on/off spec decoding dynamically (#6363) 2025-07-31 15:31:39 -04:00
eagle3.py [https://nvbugs/5355316] fix: update torch.compile option to fix triton store_cubin error (#5865) 2025-07-14 17:17:30 +08:00
interface.py [feat] Auto-enable ngram with concurrency <= 32. (#6232) 2025-07-31 18:45:51 -04:00
model_drafter.py [TRTLLM-6392][feat] Support turning on/off spec decoding dynamically (#6363) 2025-07-31 15:31:39 -04:00
mtp.py fix: remove cudaStreamSynchronize when using relaxed acceptance (#5262) 2025-07-28 09:18:41 +08:00
ngram.py [feat] Auto-enable ngram with concurrency <= 32. (#6232) 2025-07-31 18:45:51 -04:00
utils.py Mtp optimizations round1 (#5689) 2025-07-25 13:48:27 -04:00
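The listing above references n-gram speculative drafting (ngram.py, auto-enabled at concurrency <= 32 per commit #6232). As a hedged illustration of the general technique only — the class name, method names, and pool structure below are assumptions for the sketch, not the real API of ngram.py — an n-gram drafter matches the most recent tokens against previously seen contexts and proposes their continuation as draft tokens:

```python
# Minimal sketch of n-gram speculative drafting. All names here
# (NgramDrafter, update, draft) are illustrative assumptions and do
# not reflect TensorRT-LLM's actual ngram.py implementation.
from collections import defaultdict


class NgramDrafter:
    """Proposes draft tokens by matching the latest context n-gram
    against n-grams seen earlier in the token history."""

    def __init__(self, ngram_size: int = 3, max_draft_len: int = 4):
        self.ngram_size = ngram_size        # length of the matching prefix
        self.max_draft_len = max_draft_len  # max speculative tokens per step
        self.pool = defaultdict(list)       # prefix tuple -> continuations

    def update(self, tokens: list[int]) -> None:
        # Index every (prefix -> continuation) pair found in the history.
        n = self.ngram_size
        for i in range(len(tokens) - n):
            prefix = tuple(tokens[i:i + n])
            self.pool[prefix].append(tokens[i + n:i + n + self.max_draft_len])

    def draft(self, tokens: list[int]) -> list[int]:
        # Look up the most recent n tokens; if the prefix was seen before,
        # speculate that the same continuation follows again.
        prefix = tuple(tokens[-self.ngram_size:])
        matches = self.pool.get(prefix)
        return matches[-1] if matches else []


history = [1, 2, 3, 4, 5, 1, 2, 3]
drafter = NgramDrafter(ngram_size=3, max_draft_len=2)
drafter.update(history)
print(drafter.draft(history))  # -> [4, 5]: (1, 2, 3) was followed by 4, 5
```

The drafted tokens would then be verified in a single target-model forward pass, with mismatching tokens discarded; the concurrency <= 32 auto-enable heuristic suggests this cheap drafting pays off mainly at low batch sizes, where the target model is latency-bound rather than throughput-bound.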