TensorRT-LLM/cpp/tests/unit_tests/batch_manager
Latest commit da0b0e0ee3 by Netanel Haber
fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983)

* fix variable window size reuse - disable when *min attention window* starts sliding, not max
* isPreCyclic -> isCyclic, and invert logic, for clarity
* getDecoderState()

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
2025-03-24 22:49:52 +08:00
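
The idea behind the fix, as a minimal C++ sketch (hypothetical helper names, not the actual KVCacheManager API): with variable attention window sizes, the cache for the smallest window is the first to start sliding and overwriting blocks, so block reuse has to be disabled once the token count exceeds the minimum window, not the maximum. The sketch assumes a non-empty list of window sizes.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical sketch: the smallest attention window is the first to start
    // sliding (i.e. become cyclic), so it decides when reuse must be disabled.
    bool isCyclic(std::vector<int32_t> const& attentionWindows, int32_t numTokens)
    {
        int32_t minWindow = *std::min_element(attentionWindows.begin(), attentionWindows.end());
        return numTokens > minWindow;
    }

    bool canReuseBlocks(std::vector<int32_t> const& attentionWindows, int32_t numTokens)
    {
        // Before the fix, reuse stayed enabled until the *maximum* window slid;
        // now it is disabled as soon as any (i.e. the minimum) window is cyclic.
        return !isCyclic(attentionWindows, numTokens);
    }
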
File | Last commit | Date
capacitySchedulerTest.cpp | fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983) | 2025-03-24 22:49:52 +08:00
CMakeLists.txt | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
contextProgressTest.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
evictionPolicyTest.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
kvCacheManagerTest.cpp | fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983) | 2025-03-24 22:49:52 +08:00
kvCacheUtilsTest.cpp | fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983) | 2025-03-24 22:49:52 +08:00
llmRequestTest.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
microBatchSchedulerTest.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
staticThreadPoolTest.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00