TensorRT-LLM/cpp/tensorrt_llm
Latest commit: fffb403125 by Robin Kobus, 2025-04-16 03:07:32 +08:00

fix: disable KV cache reuse if using attention sink (#3021)

* fix: disable KV cache reuse if using attention sink
* fix: disable KV cache reuse if sink bubble
* add comment

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Name                                           Last commit                                                      Date
batch_manager                                  fix: disable KV cache reuse if using attention sink (#3021)     2025-04-16 03:07:32 +08:00
common                                         feat: Add FP8 support for SM 120 (#3248)                         2025-04-14 16:05:41 -07:00
cutlass_extensions/include/cutlass_extensions  feat: Update cutlass (#2981)                                     2025-03-26 22:36:27 +08:00
executor                                       chore: Clean up cpp runtime (#3537)                              2025-04-15 16:06:14 +08:00
executor_worker                                Update TensorRT-LLM (#2792)                                      2025-02-18 21:27:39 +08:00
kernels                                        Revert "infra: move nvrtc_wrapper to conan (#3282)" (#3573)      2025-04-15 22:45:13 +08:00
layers                                         fix: Eagle decoding (#3456)                                      2025-04-11 22:06:38 +08:00
plugins                                        feat: Add FP8 support for SM 120 (#3248)                         2025-04-14 16:05:41 -07:00
pybind                                         feat: Integrate peftCacheManager in PyExecutor creation (#3372)  2025-04-15 15:14:43 +08:00
runtime                                        chore: Clean up cpp runtime (#3537)                              2025-04-15 16:06:14 +08:00
thop                                           Cache sin cos in model instead of global LRU cache. (#3378)      2025-04-14 11:19:09 +08:00
CMakeLists.txt                                 Revert "infra: move nvrtc_wrapper to conan (#3282)" (#3573)      2025-04-15 22:45:13 +08:00