TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit: 62042a9733 by Kaiyu Xie, 2025-09-17 09:41:32 +08:00
[TRTLLM-6741] [feat] enable LM tp for MTP, under attention dp case (cherry-pick #7128) (#7571)
Signed-off-by: Cheng Hang <chang@nvidia.com>
Co-authored-by: Cheng Hang <chang@nvidia.com>
File    Last commit    Date
__init__.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
auto_heuristic.py [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676) 2025-08-07 12:51:47 -04:00
drafter.py [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) 2025-09-16 07:33:44 +08:00
drafting_loops.py [https://nvbugs/5502352][fix] Fix 2-model CDL path (#7543) 2025-09-06 23:53:27 -04:00
eagle3.py [None][feat] Eagle, use last hidden post norm (#7546) 2025-09-15 12:23:57 -04:00
interface.py [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) 2025-09-16 07:33:44 +08:00
model_drafter.py [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) 2025-09-16 07:33:44 +08:00
mtp.py [TRTLLM-6741] [feat] enable LM tp for MTP, under attention dp case (cherry-pick #7128) (#7571) 2025-09-17 09:41:32 +08:00
ngram.py [TRTLLM-6668][feat] Enable overlap scheduler for two-model spec decoding (#7651) 2025-09-16 07:33:44 +08:00
utils.py [None][feat] MultiLayer Eagle (#7234) 2025-09-04 10:49:13 -04:00
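The files above are the Python submodules of the speculative-decoding package in the PyTorch backend. Below is a minimal sketch of how the listed files map to importable modules; it assumes a local TensorRT-LLM installation with the _torch backend available, and the per-module comments are only inferences from the file names and commit messages in the listing, not documented APIs.

    # Minimal sketch: each file listed above is a submodule of
    # tensorrt_llm._torch.speculative (assumes TensorRT-LLM is installed
    # with its PyTorch backend; module roles are inferred from the file
    # names and commit messages, not from this listing).
    from tensorrt_llm._torch.speculative import (
        drafter,  # drafter.py: drafting base classes (assumed)
        eagle3,   # eagle3.py: EAGLE-style speculation (per the Eagle commit messages)
        mtp,      # mtp.py: Multi-Token Prediction, incl. LM tp under attention dp
        ngram,    # ngram.py: n-gram drafting (auto mode, max_concurrency)
        utils,    # utils.py: shared helpers (assumed)
    )

    print(drafter.__file__)  # verify the package resolves on this installation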