TensorRT-LLM/tensorrt_llm/runtime
Latest commit: db8dc97b7b by Yuan Tong, 2025-08-07 16:29:55 -04:00
[None][fix] Migrate to new cuda binding package name (#6700)
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
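The fix in this latest commit presumably tracks the cuda-python reorganization in which the old top-level `cuda.cuda` / `cuda.cudart` modules were deprecated in favor of the `cuda.bindings` subpackage. A minimal sketch of what such an import migration looks like, shown here only as an illustration of the rename rather than the actual patch:

```python
# Prefer the new cuda.bindings package layout, falling back to the
# deprecated top-level module names on older cuda-python releases.
try:
    from cuda.bindings import runtime as cudart  # new package layout
except ImportError:
    from cuda import cudart  # pre-bindings layout, now deprecated

# cuda-python calls return (error_code, result, ...) tuples.
err, device_count = cudart.cudaGetDeviceCount()
assert err == cudart.cudaError_t.cudaSuccess
print(f"Visible CUDA devices: {device_count}")
```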
Name                          Last updated                  Last commit
memory_pools/                 2025-02-25 21:21:49 +08:00    Update TensorRT-LLM (#2820)
processor_wrapper/            2025-03-11 21:13:42 +08:00    Update TensorRT-LLM (#2873)
__init__.py                   2024-08-13 22:34:33 +08:00    Update TensorRT-LLM (#2110)
enc_dec_model_runner.py       2025-05-16 04:16:53 +08:00    feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034)
generation.py                 2025-08-07 16:29:55 -04:00    [None][fix] Migrate to new cuda binding package name (#6700)
kv_cache_manager.py           2024-09-30 13:51:19 +02:00    open source 7f370deb0090d885d7518c2b146399ba3933c004 (#2273)
medusa_utils.py               2024-12-16 21:50:47 -08:00    Update TensorRT-LLM (#2582)
model_runner_cpp.py           2025-07-25 18:10:40 -04:00    [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974)
model_runner.py               2025-07-25 18:10:40 -04:00    [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974)
multimodal_model_runner.py    2025-08-07 16:29:55 -04:00    [None][fix] Migrate to new cuda binding package name (#6700)
redrafter_utils.py            2024-12-16 21:50:47 -08:00    Update TensorRT-LLM (#2582)
session.py                    2025-02-11 03:01:00 +00:00    Update TensorRT-LLM (#2755)
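For context on how the listed modules fit together: model_runner.py exposes the high-level ModelRunner entry point, which drives the generation session defined in generation.py. A minimal usage sketch, assuming a prebuilt engine directory and a Hugging Face tokenizer (the paths are placeholders, and keyword arguments and defaults vary between releases):

```python
import torch
from transformers import AutoTokenizer
from tensorrt_llm.runtime import ModelRunner

# Hypothetical paths: point these at your own engine build and tokenizer.
engine_dir = "/path/to/trt_engines/llama"
tokenizer = AutoTokenizer.from_pretrained("/path/to/hf_model")

# Load the serialized TensorRT engine(s) from the build output directory.
runner = ModelRunner.from_dir(engine_dir=engine_dir, rank=0)

# ModelRunner.generate expects one 1-D token tensor per request in the batch.
prompts = ["Hello, my name is", "The capital of France is"]
batch_input_ids = [
    torch.tensor(tokenizer.encode(p), dtype=torch.int32) for p in prompts
]

pad_id = tokenizer.pad_token_id
if pad_id is None:
    pad_id = tokenizer.eos_token_id

with torch.no_grad():
    outputs = runner.generate(
        batch_input_ids,
        max_new_tokens=32,
        end_id=tokenizer.eos_token_id,
        pad_id=pad_id,
        return_dict=True,
    )

# output_ids has shape [batch_size, num_beams, max_seq_len]; take beam 0.
for ids in outputs["output_ids"][:, 0, :]:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```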