TensorRT-LLMs/tensorrt_llm/_torch
Netanel Haber da0b0e0ee3
fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983)
* Fix variable window size reuse: disable reuse when the *minimum attention window* starts sliding, not the maximum

* isPreCyclic -> isCyclic, and invert logic, for clarity

* getDecoderState()

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
2025-03-24 22:49:52 +08:00
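
To make the fix concrete: with variable sliding-window attention, different layers keep different attention window sizes, and a layer's KV cache becomes cyclic (starts overwriting its oldest entries) as soon as the sequence grows past that layer's window. Cached blocks can therefore stop matching a prefix of the prompt once the *smallest* window starts sliding, even though larger windows still hold the full history. The sketch below illustrates that condition only; it is a hypothetical illustration, not TensorRT-LLM's actual API (`is_cyclic`, `can_reuse_kv_blocks`, and `attention_windows` are made-up names).

```python
# Hypothetical sketch of the reuse condition described in the commit; the
# function and parameter names are illustrative, not TensorRT-LLM's API.

def is_cyclic(sequence_length: int, attention_windows: list[int]) -> bool:
    """True once the smallest attention window has started sliding,
    i.e. at least one layer has begun overwriting its oldest KV entries."""
    return sequence_length > min(attention_windows)


def can_reuse_kv_blocks(sequence_length: int, attention_windows: list[int]) -> bool:
    # The fix: gate reuse on the *minimum* window. Gating on max(attention_windows)
    # would keep reuse enabled while smaller windows were already dropping tokens.
    return not is_cyclic(sequence_length, attention_windows)


if __name__ == "__main__":
    windows = [1024, 4096]                     # per-layer variable attention windows
    print(can_reuse_kv_blocks(512, windows))   # True  -- no window is sliding yet
    print(can_reuse_kv_blocks(2048, windows))  # False -- the 1024 window is sliding
    print(can_reuse_kv_blocks(8192, windows))  # False -- every window is sliding
```

The sequence-length range between the smallest and largest window is exactly where a check against the maximum window would still have allowed reuse.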
Name                  | Last commit                 | Date
attention_backend     | Update (#2978)              | 2025-03-23 16:39:35 +08:00
auto_deploy           | Update (#2978)              | 2025-03-23 16:39:35 +08:00
compilation           | Update (#2978)              | 2025-03-23 16:39:35 +08:00
custom_ops            | Update (#2978)              | 2025-03-23 16:39:35 +08:00
models                | Update (#2978)              | 2025-03-23 16:39:35 +08:00
modules               | Update (#2978)              | 2025-03-23 16:39:35 +08:00
pyexecutor            | fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983) | 2025-03-24 22:49:52 +08:00
speculative           | Update (#2978)              | 2025-03-23 16:39:35 +08:00
__init__.py           | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
autotuner.py          | Update (#2978)              | 2025-03-23 16:39:35 +08:00
distributed.py        | Update (#2978)              | 2025-03-23 16:39:35 +08:00
llm.py                | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00
metadata.py           | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00
model_config.py       | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00
pipeline_interface.py | Update (#2978)              | 2025-03-23 16:39:35 +08:00
utils.py              | Update (#2978)              | 2025-03-23 16:39:35 +08:00