TensorRT-LLM/cpp/tensorrt_llm/executor
Latest commit af04b6f6aa by Iman Tabrizian, 2025-04-21 15:16:55 +08:00

bug: Fix hang bug when context server doesn't have enough capacity for KV Cache (#3095)

* Fix hang bug when KV cache capacity is low
* Address review comments
* Fix attention DP typo
* Add a CI test for this case
* fix: Fix the insertion order for responder futures
* fix: Fix disagg C++

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
| File | Last commit | Date |
| --- | --- | --- |
| cache_transmission/ | bug: Fix hang bug when context server doesn't have enough capacity for KV Cache (#3095) | 2025-04-21 15:16:55 +08:00 |
| CMakeLists.txt | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| contextPhaseParams.cpp | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| debugConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| decodingConfig.cpp | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| disaggServerUtil.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| dynamicBatchConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| dynamicBatchTuner.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| dynamicBatchTuner.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| executor.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| executorConfig.cpp | feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) | 2025-04-21 14:31:01 +08:00 |
| executorImpl.cpp | feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) | 2025-04-21 14:31:01 +08:00 |
| executorImpl.h | feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) | 2025-04-21 14:31:01 +08:00 |
| executorKVCacheEventManager.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| extendedRuntimePerfKnobConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| guidedDecodingConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| guidedDecodingParams.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| intervalSet.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| jsonSerialization.cpp | feat: Add BW measurement (#3070) | 2025-03-28 10:53:00 +08:00 |
| kvCacheConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| kvCacheRetentionConfig.cpp | chore: Clean up cpp runtime (#3537) | 2025-04-15 16:06:14 +08:00 |
| logitsPostProcessorConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| loraConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| model.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| mropeConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| orchestratorConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| orchestratorUtils.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| outputConfig.cpp | feat: Allow individual gatherContext for each additional output (#3374) | 2025-04-12 17:00:36 +08:00 |
| parallelConfig.cpp | feat: Add numNodes to ParallelConfig (#3346) | 2025-04-13 13:55:04 +02:00 |
| peftCacheConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| promptTuningConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| request.cpp | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| requestImpl.h | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| requestUtils.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| requestUtils.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| requestWithId.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| requestWithId.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| response.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| responseImpl.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| samplingConfig.cpp | feat: Variable-Beam-Width-Search (VBWS) Part2 (#3133) | 2025-04-02 12:31:28 +08:00 |
| schedulerConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| serialization.cpp | bug: Fix hang bug when context server doesn't have enough capacity for KV Cache (#3095) | 2025-04-21 15:16:55 +08:00 |
| serializeUtils.h | feat: Allow individual gatherContext for each additional output (#3374) | 2025-04-12 17:00:36 +08:00 |
| speculativeDecodingConfig.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| tensor.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| types.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
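The sources above implement the `tensorrt_llm::executor` C++ API (`Executor`, `ExecutorConfig`, `Request`, `Response`, `KvCacheConfig`, ...). Below is a minimal usage sketch, assuming a prebuilt decoder-only engine at a placeholder path `enginePath` and illustrative token IDs; exact constructor details (for example, the name of the max-new-tokens parameter) vary slightly between TensorRT-LLM releases, so treat this as a sketch rather than a drop-in snippet.

```cpp
#include "tensorrt_llm/executor/executor.h"

#include <iostream>

namespace tle = tensorrt_llm::executor;

int main()
{
    // Cap the fraction of free GPU memory used for the KV cache.
    // The 0.85 value is illustrative, not a recommendation.
    tle::KvCacheConfig kvCacheConfig;
    kvCacheConfig.setFreeGpuMemoryFraction(0.85F);

    tle::ExecutorConfig executorConfig;
    executorConfig.setKvCacheConfig(kvCacheConfig);

    // "enginePath" is a placeholder for a prebuilt TensorRT-LLM engine directory.
    tle::Executor executor("enginePath", tle::ModelType::kDECODER_ONLY, executorConfig);

    // Enqueue one request: illustrative input token IDs, 16 new tokens requested.
    tle::Request request(tle::VecTokens{1, 100, 200, 300}, /*maxTokens=*/16);
    auto const requestId = executor.enqueueRequest(request);

    // Block until responses for this request are available, then print beam 0.
    for (auto const& response : executor.awaitResponses(requestId))
    {
        if (response.hasError())
        {
            std::cerr << response.getErrorMsg() << std::endl;
            continue;
        }
        for (auto tokenId : response.getResult().outputTokenIds.at(0))
        {
            std::cout << tokenId << " ";
        }
        std::cout << std::endl;
    }
    return 0;
}
```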