TensorRT-LLM/cpp/tensorrt_llm/pybind/batch_manager
Latest commit: 01423ac183 [None][feat] perf_metrics endpoint functionality improvement (#8005), Yilin Fan, 2025-10-02 17:43:25 -07:00
algorithms.cpp [TRTLLM-7028][feat] Enable guided decoding with speculative decoding (part 2: one-model engine) (#6948) 2025-09-03 15:16:11 -07:00
algorithms.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
bindings.cpp [None][feat] perf_metrics endpoint functionality improvement (#8005) 2025-10-02 17:43:25 -07:00
bindings.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
cacheTransceiver.cpp [None][feat] Support for cancelling requests with disaggregation (#8114) 2025-10-02 11:04:26 -07:00
cacheTransceiver.h Update TensorRT-LLM (#2820) 2025-02-25 21:21:49 +08:00
kvCacheConnector.cpp [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00
kvCacheConnector.h [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00
kvCacheManager.cpp [TRTLLM-6106][feat] Add support for KVCache transfer from KVCache reuse path (#6348) 2025-09-27 19:29:30 -04:00
kvCacheManager.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
llmRequest.cpp [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) 2025-09-15 07:26:01 -04:00
llmRequest.h [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) 2025-09-15 07:26:01 -04:00