TensorRT-LLMs/cpp
Yueh-Ting (eop) Chen 4882815fa1
[TLLM-6777][feature] Support SWA KV cache reuse OOW block detach (#7922)
This MR is a continuation of #6768. In the previous merge request,
OOW (out-of-window) blocks were only detached when reuse was not
enabled; that is, with reuse enabled, the block movement behavior was
identical between SWA and full attention.

This merge request attempts to enable OOW block detach when reuse is
enabled. The required changes are:

- Let the KV cache manager keep track of which block is used by which
  sequence (a sketch follows this list)
- Remove the restriction that prevents the eviction policy from
  releasing a non-leaf block
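
A minimal sketch of the bookkeeping the first item implies, assuming a
plain map from block ID to owning sequence; the names below (BlockId,
RequestId, BlockOwnershipMap) are hypothetical and are not the actual
TensorRT-LLM types.

```cpp
// Hypothetical ownership bookkeeping; not the actual TensorRT-LLM data structures.
#include <cstdint>
#include <optional>
#include <unordered_map>

using BlockId = std::int32_t;
using RequestId = std::uint64_t;

class BlockOwnershipMap
{
public:
    // Record that `block` is currently used by sequence `seq`.
    void setOwner(BlockId block, RequestId seq)
    {
        mOwner[block] = seq;
    }

    // Return the sequence that last held `block`, if any was recorded.
    std::optional<RequestId> getOwner(BlockId block) const
    {
        auto const it = mOwner.find(block);
        return it != mOwner.end() ? std::optional<RequestId>{it->second} : std::nullopt;
    }

    // Forget the record once a block is returned for good.
    void clearOwner(BlockId block)
    {
        mOwner.erase(block);
    }

private:
    std::unordered_map<BlockId, RequestId> mOwner; // block -> owning sequence
};
```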

Along with the development, bugs inside freeChildren and the offload
mechanism under getFreeBlock are also resolved, because they affect
the functionality this merge request is trying to achieve.

When a block goes OOW, it is released from the sequence: it becomes
available to be reclaimed and is held by the eviction policy for
another sequence to acquire on request. At the same time, we still
want to potentially store the sequence for reuse. To achieve this
safely, block ownership is recorded under
WindowBlockManager::getFreeBlock. If the acquired block was originally
owned by another sequence that is still live inside the manager, that
sequence is invalidated for store-for-reuse.
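
A hedged sketch of that acquisition path, assuming the eviction policy
hands out free blocks from a simple queue; acquireFreeBlockSketch and
the containers it takes are illustrative stand-ins, not the real
WindowBlockManager::getFreeBlock internals.

```cpp
// Illustrative-only sketch: acquiring a free block may hand out an OOW block
// that a live sequence released earlier; in that case the previous owner can
// no longer be stored for reuse.
#include <cassert>
#include <cstdint>
#include <deque>
#include <unordered_map>
#include <unordered_set>

using BlockId = std::int32_t;
using RequestId = std::uint64_t;

BlockId acquireFreeBlockSketch(std::deque<BlockId>& freeBlocks, // held by the eviction policy
    std::unordered_map<BlockId, RequestId>& blockOwner,         // block -> last owning sequence
    std::unordered_set<RequestId> const& liveSequences,         // sequences still inside the manager
    std::unordered_set<RequestId>& invalidatedForReuse,         // sequences that must not be stored
    RequestId acquiringSeq)
{
    assert(!freeBlocks.empty());
    BlockId const block = freeBlocks.front(); // may be an OOW block detached earlier
    freeBlocks.pop_front();

    auto const it = blockOwner.find(block);
    if (it != blockOwner.end() && it->second != acquiringSeq && liveSequences.count(it->second) > 0)
    {
        // The previous owner is still live but has lost this block, so its KV
        // history is no longer complete and cannot be stored for reuse.
        invalidatedForReuse.insert(it->second);
    }
    blockOwner[block] = acquiringSeq; // record the new ownership
    return block;
}
```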

At the end of a sequence (when removeSequence is called on it), the
KV cache manager checks whether any of the sequence's blocks have been
reclaimed by another sequence. If none have, the sequence is safe to
store for reuse and the store-for-reuse action is performed.
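
A sketch of that end-of-sequence check under the same hypothetical
bookkeeping; the real removeSequence path in the KV cache manager
differs in signature and in how the store-for-reuse action is carried
out.

```cpp
// Illustrative-only sketch of the end-of-sequence decision: the sequence's
// blocks may be stored for reuse only if no other sequence reclaimed them.
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using BlockId = std::int32_t;
using RequestId = std::uint64_t;

bool canStoreForReuseSketch(RequestId seq, std::vector<BlockId> const& blocksOfSeq,
    std::unordered_map<BlockId, RequestId> const& blockOwner,
    std::unordered_set<RequestId> const& invalidatedForReuse)
{
    if (invalidatedForReuse.count(seq) > 0)
    {
        return false; // one of this sequence's OOW blocks was reclaimed earlier
    }
    for (BlockId const block : blocksOfSeq)
    {
        auto const it = blockOwner.find(block);
        if (it == blockOwner.end() || it->second != seq)
        {
            return false; // block no longer belongs to this sequence
        }
    }
    return true; // safe to perform the store-for-reuse action
}
```

In this sketch, a true return corresponds to performing the
store-for-reuse action; a false return drops the blocks without
inserting them into the reuse structure.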

Signed-off-by: eopXD <yuehtingc@nvidia.com>
2025-10-13 09:18:12 -07:00
cmake [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
include/tensorrt_llm [TLLM-6777][feature] Support SWA KV cache reuse OOW block detach (#7922) 2025-10-13 09:18:12 -07:00
kernels [None][feat] GPT-OSS Sm120/Sm121 Support (#7937) 2025-10-06 16:59:06 -04:00
micro_benchmarks [TRTLLM-6286] [perf] Add NoSmem epilogue schedule and dynamic cluster shape for sm10x group gemm (#7757) 2025-09-21 11:38:17 +08:00
tensorrt_llm [TLLM-6777][feature] Support SWA KV cache reuse OOW block detach (#7922) 2025-10-13 09:18:12 -07:00
tests [TLLM-6777][feature] Support SWA KV cache reuse OOW block detach (#7922) 2025-10-13 09:18:12 -07:00
CMakeLists.txt [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) 2025-09-25 21:02:35 +08:00
conandata.yml infra: add conan (#3744) 2025-04-30 11:53:14 -07:00
conanfile.py feat: large-scale EP(part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) 2025-06-08 10:25:18 +08:00
libnuma_conan.py fix cuda driver link issue with driver version less than 12.3 (#5025) 2025-06-10 15:27:39 +08:00