TensorRT-LLM/cpp/include/tensorrt_llm
Latest commit: ce580ce4f5 by Richard Huo, 2025-08-28 23:09:27 -04:00
[None][feat] KV Cache Connector API (#7228)
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Signed-off-by: richardhuo-nv <rihuo@nvidia.com>
Co-authored-by: jthomson04 <jwillthomson19@gmail.com>
Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
Directory       Last updated                Last commit
batch_manager   2025-08-28 23:09:27 -04:00  [None][feat] KV Cache Connector API (#7228)
common          2025-08-21 11:08:11 +08:00  [None][fix] Fix const modifier inconsistency in log function declaration/implementation (#6679)
deep_gemm       2025-08-29 09:28:04 +08:00  [TRTLLM-7457][ci] Update unittest parallel config (#7297)
executor        2025-08-26 12:40:22 +08:00  fix/improve kvcache allocation in PyTorch runtime (#5933)
kernels         2025-07-28 16:02:26 +08:00  fix: compatibility with CUDA < 12.9 on __CUDA_ARCH_SPECIFIC__ macro (#5917)
layers          2025-03-26 23:31:29 +08:00  v1.2 (#3082)
plugins/api     2024-12-04 21:16:56 +08:00  Update TensorRT-LLM (#2532)
runtime         2025-08-22 18:44:17 +02:00  [None][refactor] Simplify decoder state initialization for speculative decoding (#6869)