TensorRT-LLM/cpp/tensorrt_llm/pybind/batch_manager
Latest commit: 70e4d72ffa by shuyixiong (2025-11-04 10:19:24 -08:00)
[TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302)
Signed-off-by: shuyix <219646547+shuyixiong@users.noreply.github.com>
Co-authored-by: Liwei Ma <liweim@nvidia.com>
Co-authored-by: Jonas Yang CN <joyang@nvidia.com>
algorithms.cpp [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) 2025-09-03 15:16:11 -07:00
algorithms.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
bindings.cpp [None][feat] add detailed KV cache transfer time breakdown (#8521) 2025-10-29 10:11:09 +08:00
bindings.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
cacheTransceiver.cpp [TRTLLM-7078][chore] optimal kvcache transfer for VWSA (#7952) 2025-10-24 08:58:16 -04:00
cacheTransceiver.h Update TensorRT-LLM (#2820) 2025-02-25 21:21:49 +08:00
kvCacheConnector.cpp [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00
kvCacheConnector.h [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00
kvCacheManager.cpp [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) 2025-11-04 10:19:24 -08:00
kvCacheManager.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
llmRequest.cpp [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) 2025-09-15 07:26:01 -04:00
llmRequest.h [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) 2025-09-15 07:26:01 -04:00
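The .cpp/.h pairs listed above are pybind11 glue that exposes the C++ batch-manager components (algorithms, KV cache manager, cache transceiver, LlmRequest, and so on) to Python. As a rough illustration of that pattern only, the sketch below registers a stub class on a module the way such a binding translation unit typically does; the names used here (example_batch_manager, LlmRequestStub, initBindings) are hypothetical placeholders, not the actual TensorRT-LLM symbols.

    // Minimal, hypothetical pybind11 sketch of the binding pattern in this
    // directory: a header-declared init hook registers C++ types on a module.
    #include <pybind11/pybind11.h>

    namespace py = pybind11;

    namespace example_bindings
    {
    // Stand-in C++ type to expose to Python (illustrative only).
    struct LlmRequestStub
    {
        int requestId{0};
    };

    // Hook that a bindings.cpp-style entry point would call for this submodule.
    void initBindings(py::module_& m)
    {
        py::class_<LlmRequestStub>(m, "LlmRequestStub")
            .def(py::init<>())
            .def_readwrite("request_id", &LlmRequestStub::requestId);
    }
    } // namespace example_bindings

    // Module entry point; the real extension aggregates many such init hooks.
    PYBIND11_MODULE(example_batch_manager, m)
    {
        m.doc() = "Illustrative batch_manager-style binding module (hypothetical)";
        example_bindings::initBindings(m);
    }

With this layout, adding a new binding file (as the kvCacheManager.cpp or llmRequest.cpp entries above were added over time) amounts to declaring another init hook in a header and invoking it from the central module definition.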