TensorRT-LLM/tensorrt_llm/runtime

Latest commit: 361ff36784 by Yi Zhang (2026-02-15 21:40:54 +08:00)
[None][feat] Use new index api, add block scale support, fix max_seq_len esitmation, add flash mla support (#11334)
Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
Signed-off-by: yizhang-nv <187001205+yizhang-nv@users.noreply.github.com>
| Name | Last commit | Date |
| --- | --- | --- |
| kv_cache_manager_v2/ | [None][feat] Use new index api, add block scale support, fix max_seq_len esitmation, add flash mla support (#11334) | 2026-02-15 21:40:54 +08:00 |
| memory_pools/ | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| processor_wrapper/ | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| __init__.py | [None][feat] New KVCacheManagerV2 APIs for Transceiver (#11003) | 2026-01-30 18:09:53 +08:00 |
| enc_dec_model_runner.py | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| generation.py | [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) | 2025-10-28 09:17:26 -07:00 |
| kv_cache_manager.py | open source 7f370deb0090d885d7518c2b146399ba3933c004 (#2273) | 2024-09-30 13:51:19 +02:00 |
| medusa_utils.py | Update TensorRT-LLM (#2582) | 2024-12-16 21:50:47 -08:00 |
| model_runner_cpp.py | [TRTLLM-8684][chore] Migrate BuildConfig to Pydantic, add a Python wrapper for KVCacheType enum (#8330) | 2025-10-28 09:17:26 -07:00 |
| model_runner.py | [#6425][fix] address CUDA stream sync issue in ModelRunnerCPP (#6426) | 2025-12-12 13:33:22 +08:00 |
| multimodal_model_runner.py | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| redrafter_utils.py | Update TensorRT-LLM (#2582) | 2024-12-16 21:50:47 -08:00 |
| session.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |