TensorRT-LLM/cpp/tensorrt_llm/pybind/batch_manager
Netanel Haber da0b0e0ee3
fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983)
* fix variable window size reuse - disable when *min attention window* starts sliding, not max

* isPreCyclic -> isCyclic, and invert logic, for clarity

* getDecoderState()

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
2025-03-24 22:49:52 +08:00
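The fix above concerns block reuse with *variable* attention window sizes: once the sequence outgrows a layer's window, that layer's KV cache becomes cyclic (oldest tokens are overwritten), so cached blocks no longer correspond to a contiguous prefix and cannot be reused. The following is a minimal sketch of the corrected predicate, using a hypothetical `isCyclic` helper rather than the actual TensorRT-LLM API; the key point is taking the *minimum* window size, where the old behavior effectively used the maximum.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical helper, not the real TensorRT-LLM signature. With per-layer
// attention windows, the cache turns cyclic as soon as the *smallest*
// window starts sliding; checking against the largest window (the old
// behavior) keeps reuse enabled after some layers have already evicted
// prefix tokens.
bool isCyclic(std::vector<int64_t> const& windowSizes, int64_t numTokens)
{
    auto const minWindow = *std::min_element(windowSizes.begin(), windowSizes.end());
    return numTokens > minWindow; // any window slid => blocks no longer a clean prefix
}
```

With windows {1024, 4096} and 2048 tokens, a max-based check would still report a non-cyclic cache, while the min-based check correctly flags it as cyclic and disables reuse.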
algorithms.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
algorithms.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
bindings.cpp Update (#2978) 2025-03-23 16:39:35 +08:00
bindings.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
buffers.cpp Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
buffers.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
cacheTransceiver.cpp Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
cacheTransceiver.h Update TensorRT-LLM (#2820) 2025-02-25 21:21:49 +08:00
kvCacheManager.cpp fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983) 2025-03-24 22:49:52 +08:00
kvCacheManager.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
llmRequest.cpp Update TensorRT-LLM (#2582) 2024-12-16 21:50:47 -08:00
llmRequest.h Update TensorRT-LLM (#2582) 2024-12-16 21:50:47 -08:00