TensorRT-LLM/cpp/include/tensorrt_llm
Latest commit 66030ef815 (Ziyi Xiong, 2025-07-19 13:17:15 +08:00): [TRTLLM-6452][feat]: Two-model engine KV cache reuse support (#6133)
batch_manager [TRTLLM-6452][feat]: Two-model engine KV cache reuse support (#6133) 2025-07-19 13:17:15 +08:00
common [TRTLLM-5366][feat]Add support for sm121 (#5524) 2025-07-08 14:27:00 -07:00
deep_gemm fix: fix license bug (#5200) 2025-06-13 18:58:15 +08:00
executor chore:[BREAKING CHANGE] use cacheTransceiverConfig as knobs for disagg service (#5234) 2025-07-17 17:42:07 +08:00
kernels feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
layers v1.2 (#3082) 2025-03-26 23:31:29 +08:00
plugins/api Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
runtime refactor: decoding inputs (#5679) 2025-07-06 08:21:02 +02:00
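The executor directory listed above contains the public Executor API headers through which client code drives a built engine. As a rough orientation only, here is a minimal sketch of how that API is typically consumed; the engine path and prompt tokens are placeholders, and member names such as maxTokens or isFinal are assumptions that may differ between TensorRT-LLM releases.

```cpp
// Minimal sketch, assuming the public Executor API shipped under
// cpp/include/tensorrt_llm/executor/. Paths, token IDs, and exact
// member names are illustrative assumptions, not verified against
// a specific release.
#include "tensorrt_llm/executor/executor.h"

#include <iostream>

namespace tle = tensorrt_llm::executor;

int main()
{
    // Configuration knobs for the executor (batching, KV cache, etc.).
    tle::ExecutorConfig config;

    // Load a prebuilt decoder-only engine from disk (placeholder path).
    tle::Executor executor("/path/to/engine_dir", tle::ModelType::kDECODER_ONLY, config);

    // Enqueue a single request: prompt token IDs plus a generation budget.
    tle::Request request({1, 2, 3, 4}, /*maxTokens=*/16);
    auto requestId = executor.enqueueRequest(request);

    // Poll for responses until the request reports a final result or an error.
    for (bool done = false; !done;)
    {
        for (auto const& response : executor.awaitResponses())
        {
            if (response.hasError())
            {
                std::cerr << "Request failed: " << response.getErrorMsg() << std::endl;
                done = true;
            }
            else if (response.getResult().isFinal)
            {
                auto const& tokens = response.getResult().outputTokenIds.front();
                std::cout << "Generated " << tokens.size() << " tokens for request "
                          << requestId << std::endl;
                done = true;
            }
        }
    }
    return 0;
}
```

The other directories back this flow: batch_manager and the KV cache machinery schedule enqueued requests, kernels and layers implement the model math, and runtime hosts the lower-level engine and decoding plumbing.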