TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit: 0fee8cd028 [TRTLLM-7153] [feat] Move stop_criteria to sample_async (#7041), Netanel Haber <nhaber@nvidia.com>, 2025-09-07 17:36:49 +03:00
File                 Date                        Last commit
__init__.py          2025-08-07 12:51:47 -04:00  [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676)
auto_heuristic.py    2025-08-07 12:51:47 -04:00  [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676)
drafter.py           2025-08-08 13:56:53 -04:00  [None][feat] Optimize CUDA graph memory usage for spec decode cases (#6718)
drafting_loops.py    2025-09-06 23:53:27 -04:00  [https://nvbugs/5502352][fix] Fix 2-model CDL path (#7543)
eagle3.py            2025-09-04 23:30:14 +08:00  [TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481)
interface.py         2025-09-03 15:16:11 -07:00  [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948)
model_drafter.py     2025-09-03 15:16:11 -07:00  [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948)
mtp.py               2025-09-07 17:36:49 +03:00  [TRTLLM-7153] [feat] Move stop_criteria to sample_async (#7041)
ngram.py             2025-08-27 11:07:50 +02:00  [None][refactor] Move draft token padding out of Drafter (#7134)
utils.py             2025-09-04 10:49:13 -04:00  [None][feat] MultiLayer Eagle (#7234)
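Judging by the names and commit messages, these modules appear to implement draft-and-verify speculative decoding: drafter.py seems to hold the drafter abstraction, ngram.py an n-gram drafter, and model_drafter.py a draft-model-based one. As a rough illustration of that pattern only, and emphatically not TensorRT-LLM's actual API, here is a self-contained toy sketch; every name in it (ToyNGramDrafter, toy_target_model, verify) is hypothetical.

```python
# Illustrative toy of draft-and-verify speculative decoding.
# NOT the TensorRT-LLM API; all names here are made up for this sketch.
from collections import defaultdict
from typing import List


class ToyNGramDrafter:
    """Proposes draft tokens by matching the tail of the sequence
    against previously observed (n-gram prefix -> next token) pairs."""

    def __init__(self, n: int = 2):
        self.n = n
        self.table = defaultdict(list)  # prefix tuple -> observed next tokens

    def observe(self, tokens: List[int]) -> None:
        # Record every n-gram so future tails can be extended from history.
        for i in range(len(tokens) - self.n):
            prefix = tuple(tokens[i : i + self.n])
            self.table[prefix].append(tokens[i + self.n])

    def draft(self, tokens: List[int], max_draft: int = 4) -> List[int]:
        # Greedily extend the current tail using the n-gram table.
        out = []
        tail = list(tokens[-self.n :])
        for _ in range(max_draft):
            nxt = self.table.get(tuple(tail))
            if not nxt:
                break
            out.append(nxt[-1])            # most recent continuation
            tail = tail[1:] + [nxt[-1]]
        return out


def toy_target_model(tokens: List[int]) -> int:
    """Stand-in for the expensive target model: a fixed successor rule."""
    return (tokens[-1] + 1) % 10


def verify(tokens: List[int], draft: List[int]) -> List[int]:
    # Accept draft tokens one by one while they match what the target
    # model would have produced; stop at the first mismatch.
    accepted = []
    ctx = list(tokens)
    for d in draft:
        if d != toy_target_model(ctx):
            break
        accepted.append(d)
        ctx.append(d)
    # Always emit one token from the target model so the loop makes
    # progress even when the whole draft is rejected.
    accepted.append(toy_target_model(ctx))
    return accepted


if __name__ == "__main__":
    drafter = ToyNGramDrafter(n=2)
    seq = [1, 2, 3, 4, 5, 6]
    drafter.observe(seq)
    while len(seq) < 20:
        new_tokens = verify(seq, drafter.draft(seq))
        seq.extend(new_tokens)
        drafter.observe(seq)
    print(seq)
```

The point of the sketch is the control flow, not the models: a cheap drafter proposes several tokens, the target model checks them in one pass, and accepted tokens amortize the target model's cost. The real modules above plug EAGLE-3, MTP, or a separate draft model into the same draft/verify split.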