Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
* Fix hang bug when KV cache is low
* Review comments
* Fix attentiondp typo
* Add CI test for this case
* fix: Fix the insertion order for responder futures
* fix: Fix disagg CPP

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
| File |
|---|
| blockKeyTest.cpp |
| cacheTransceiverTest.cpp |
| CMakeLists.txt |
| guidedDecoderTest.cpp |
| modelSpec.cpp |
| modelSpec.h |
| modelSpecBinding.cpp |
| peftCacheManagerTest.cpp |
| trtEncoderModelTest.cpp |
| trtGptModelRealDecoderTest.cpp |
| trtGptModelTest.cpp |