TensorRT-LLM/cpp/include/tensorrt_llm/batch_manager
Latest commit: e2a8cbc80b refactor: manage cache indirection in decoder state (#5315)
Author: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Date: 2025-06-24 09:15:59 +02:00
File | Latest commit | Date
allocateKvCache.h | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
assignReqSeqSlots.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00
cacheTransceiver.h | Agent interface impl for NIXL (#4125) | 2025-05-22 09:09:41 +08:00
capacityScheduler.h | fix: max_num_sequences calculation with overlap scheduling (#4532) | 2025-06-03 09:31:22 +02:00
common.h | open source 4dbf696ae9b74a26829d120b67ab8443d70c8e58 (#2297) | 2024-10-08 12:19:19 +02:00
contextProgress.h | Update TensorRT-LLM (#2413) | 2024-11-05 16:27:06 +08:00
createNewDecoderRequests.h | refactor: Unify decoder test with e2e workflow (#5239) | 2025-06-17 12:04:58 +02:00
decoderBuffers.h | refactor: manage cache indirection in decoder state (#5315) | 2025-06-24 09:15:59 +02:00
evictionPolicy.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
guidedDecoder.h | Update TensorRT-LLM (#2532) | 2024-12-04 21:16:56 +08:00
handleContextLogits.h | refactor: Update decoder buffer and logits management (#4450) | 2025-06-18 08:10:32 +08:00
handleGenerationLogits.h | refactor: Update decoder buffer and logits management (#4450) | 2025-06-18 08:10:32 +08:00
kvCacheConfig.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
kvCacheEventManager.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00
kvCacheManager.h | refactor: Scheduling based on KV cache state (#4865) | 2025-06-16 08:14:58 +02:00
kvCacheTransferManager.h | feature: KV Cache GPUDirect Storage (#3209) | 2025-05-28 23:27:43 +00:00
kvCacheUtils.h | feat: cache reuse support (selective cache transfer) in mla cache formatter (#4749) | 2025-06-04 09:56:31 +08:00
llmRequest.h | Re-implement LlmResponse in Python to reduce host overhead of pybind (#5224) | 2025-06-17 21:28:09 +08:00
logitsPostProcessor.h | refactor: Update decoder buffer and logits management (#4450) | 2025-06-18 08:10:32 +08:00
makeDecodingBatchInputOutput.h | refactor: manage cache indirection in decoder state (#5315) | 2025-06-24 09:15:59 +02:00
medusaBuffers.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
microBatchScheduler.h | [TRTLLM-3429] feat: Overlap scheduling in C++ runtime (#3625) | 2025-05-06 15:06:46 +02:00
pauseRequests.h | Update TensorRT-LLM (#2532) | 2024-12-04 21:16:56 +08:00
peftCacheManager.h | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00
peftCacheManagerConfig.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
promptTuningBuffers.h | feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) | 2025-04-21 14:31:01 +08:00
rnnStateManager.h | Update TensorRT-LLM (#2413) | 2024-11-05 16:27:06 +08:00
runtimeBuffers.h | refactor: manage cache indirection in decoder state (#5315) | 2025-06-24 09:15:59 +02:00
sequenceSlotManager.h | Update TensorRT-LLM (#2413) | 2024-11-05 16:27:06 +08:00
transformerBuffers.h | refactor: manage cache indirection in decoder state (#5315) | 2025-06-24 09:15:59 +02:00
updateDecoderBuffers.h | refactor: Separate DecoderState from GptDecoderBatched (#4700) | 2025-06-03 09:42:01 +02:00