TensorRT-LLM/cpp/include/tensorrt_llm/executor
Latest commit af04b6f6aa by Iman Tabrizian, 2025-04-21 15:16:55 +08:00

bug: Fix hang bug when context server doesn't have enough capacity for KV Cache (#3095)

* Fix hang bug when KV cache is low
* Review comments
* Fix attentiondp typo
* Add CI test for this case
* fix: Fix the insertion order for responder futures
* fix: Fix disagg CPP

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
File                     Last commit date             Latest commit
cacheCommunicator.h      2025-04-02 09:42:29 +08:00   chore: Ucx ip port remove mpi depend (#3101)
dataTransceiverState.h   2025-04-21 15:16:55 +08:00   bug: Fix hang bug when context server doesn't have enough capacity for KV Cache (#3095)
disaggServerUtil.h       2025-02-18 21:27:39 +08:00   Update TensorRT-LLM (#2792)
executor.h               2025-04-21 14:31:01 +08:00   feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380)
serialization.h          2025-04-12 17:00:36 +08:00   feat: Allow individual gatherContext for each additional output (#3374)
tensor.h                 2024-07-09 14:42:22 +08:00   Update TensorRT-LLM (#1918)
types.h                  2025-04-08 23:51:27 +08:00   Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338)
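
For orientation, below is a minimal usage sketch of the Executor C++ API declared in executor.h from this directory. It is not taken from the listing above: the engine path and input token ids are placeholders, error handling is reduced to a single check, and only one awaitResponses call is shown rather than looping until the final response.

```cpp
#include <filesystem>
#include <iostream>

#include "tensorrt_llm/executor/executor.h"

namespace tle = tensorrt_llm::executor;

int main()
{
    // Hypothetical path to a built TensorRT-LLM engine directory.
    std::filesystem::path enginePath{"/path/to/engine_dir"};

    // Default executor configuration; see executor.h and types.h for the tunable options.
    tle::ExecutorConfig config;

    // Create an executor for a decoder-only model.
    tle::Executor executor(enginePath, tle::ModelType::kDECODER_ONLY, config);

    // Enqueue a request: placeholder prompt token ids plus the number of tokens to generate.
    tle::VecTokens inputTokens{1, 2, 3, 4};
    auto requestId = executor.enqueueRequest(tle::Request(inputTokens, /*maxTokens=*/16));

    // Wait for responses to this request and print generated token ids from the first beam.
    // A real client would keep calling awaitResponses until the final response arrives.
    for (auto const& response : executor.awaitResponses(requestId))
    {
        if (!response.hasError())
        {
            for (auto tok : response.getResult().outputTokenIds.at(0))
            {
                std::cout << tok << " ";
            }
            std::cout << std::endl;
        }
    }
    return 0;
}
```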