TensorRT-LLM/cpp
Latest commit: 66030ef815 by Ziyi Xiong, 2025-07-19 13:17:15 +08:00
[TRTLLM-6452][feat]: Two-model engine KV cache reuse support (#6133)
Signed-off-by: ziyixiong-nv <fxiong@nvidia.com>
Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
| Name                 | Last commit                                                                                  | Date                      |
|----------------------|----------------------------------------------------------------------------------------------|---------------------------|
| cmake                | feat: reduce unnecessary kernel generation (#5476)                                           | 2025-07-04 14:37:49 +08:00 |
| include/tensorrt_llm | [TRTLLM-6452][feat]: Two-model engine KV cache reuse support (#6133)                         | 2025-07-19 13:17:15 +08:00 |
| kernels              | update spec_dec (#6079)                                                                      | 2025-07-16 17:50:43 +08:00 |
| micro_benchmarks     | feat: Add support for benchmarking individual gemms in MOE benchmark (#6080)                 | 2025-07-18 09:00:12 +12:00 |
| tensorrt_llm         | [https://nvbugs/5393961][fix] record kv-cache size in MLACacheFormatter (#6181)              | 2025-07-19 05:06:45 +08:00 |
| tests                | refactor: Enhanced handling of decoder requests and logits within the batch manager (#6055)  | 2025-07-18 12:12:08 +02:00 |
| CMakeLists.txt       | Revert "feat: nanobind bindings (#5961)" (#6160)                                             | 2025-07-18 10:12:54 +08:00 |
| conandata.yml        | infra: add conan (#3744)                                                                     | 2025-04-30 11:53:14 -07:00 |
| conanfile.py         | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818)   | 2025-06-08 10:25:18 +08:00 |
| libnuma_conan.py     | fix cuda driver link issue with driver version less than 12.3 (#5025)                        | 2025-06-10 15:27:39 +08:00 |