TensorRT-LLM/tensorrt_llm/_torch/speculative
Latest commit: 078e907b16 by Mike Iovine (2025-08-14 18:36:19 -04:00)
[https://nvbugs/5455651][fix] Make ngram use XQA attention on Blackwell (#6873)
Signed-off-by: Michael Iovine <miovine@nvidia.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
Signed-off-by: Mike Iovine <mike.iovine7@gmail.com>
File               Last commit                                                                       Date
----               -----------                                                                       ----
__init__.py        [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676)     2025-08-07 12:51:47 -04:00
auto_heuristic.py  [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676)     2025-08-07 12:51:47 -04:00
drafter.py         [None][feat] Optimize CUDA graph memory usage for spec decode cases (#6718)       2025-08-08 13:56:53 -04:00
eagle3.py          [None][feat] Add model gpt-oss (#6645)                                            2025-08-07 03:04:18 -04:00
interface.py       [https://nvbugs/5455651][fix] Make ngram use XQA attention on Blackwell (#6873)   2025-08-14 18:36:19 -04:00
model_drafter.py   [None][feat] Clean up ngram auto mode, add max_concurrency to configs (#6676)     2025-08-07 12:51:47 -04:00
mtp.py             [TRTLLM-6853][feat] refactor deepseekv3 model (#6698)                             2025-08-14 11:03:17 -04:00
ngram.py           [https://nvbugs/5452167][fix] Fix ngram padding issue (#6837)                     2025-08-13 11:23:16 +08:00
utils.py           [TRTLLM-6853][feat] refactor deepseekv3 model (#6698)                             2025-08-14 11:03:17 -04:00