TensorRT-LLMs/cpp/tensorrt_llm/batch_manager

Latest commit: 41ef1ade19 by narutolhy, 2025-07-10 22:18:01 +09:00
feat:enable kvcache to be reused during request generation (#4028)
Signed-off-by: narutolhy <582909902@qq.com>
| File | Last commit | Date |
| --- | --- | --- |
| utils | refactor: Speculative decoding buffers part 2 (#5316) | 2025-06-27 17:41:48 +02:00 |
| allocateKvCache.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| assignReqSeqSlots.cpp | [TRTLLM-6104] feat: add request_perf_metrics to LLMAPI (#5497) | 2025-06-27 17:03:05 +02:00 |
| cacheFormatter.cpp | Cache transceiver support VSWA (#5505) | 2025-07-05 01:18:42 +09:00 |
| cacheFormatter.h | Cache transceiver support VSWA (#5505) | 2025-07-05 01:18:42 +09:00 |
| cacheTransBuffer.cpp | Cache transceiver support VSWA (#5505) | 2025-07-05 01:18:42 +09:00 |
| cacheTransBuffer.h | Solve underallocation in VSWA+/VGQA (#4667) | 2025-06-12 12:12:46 +08:00 |
| cacheTransceiver.cpp | fix: Disaggregate serving with attention DP (#4993) | 2025-07-08 16:15:03 +08:00 |
| capacityScheduler.cpp | refactor: Scheduling based on KV cache state (#4865) | 2025-06-16 08:14:58 +02:00 |
| CMakeLists.txt | chore: cleanup GDS Cmake interface (#4928) | 2025-06-10 17:25:43 +08:00 |
| contextProgress.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| createNewDecoderRequests.cpp | fix: Improve chunking test and skip empty kernel calls (#5710) | 2025-07-04 09:08:15 +02:00 |
| dataTransceiver.cpp | Fabric Memory for KV Cache Transfer (#4717) | 2025-05-30 15:50:21 +08:00 |
| dataTransceiver.h | chore: rename IOFormatter to BaseCacheFormatter (#5068) | 2025-06-12 10:50:14 +08:00 |
| dataTransceiverImpl.cpp | Cache transceiver support VSWA (#5505) | 2025-07-05 01:18:42 +09:00 |
| dataTransceiverImpl.h | Cache transceiver support VSWA (#5505) | 2025-07-05 01:18:42 +09:00 |
| decoderBuffers.cpp | refactor: Speculative decoding buffers part 2 (#5316) | 2025-06-27 17:41:48 +02:00 |
| encoderBuffers.cpp | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| encoderBuffers.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| evictionPolicy.cpp | [JIRA-5226219][fix] Fix Bug in KV cache manager (#4596) | 2025-05-29 22:03:20 -07:00 |
| guidedDecoder.cpp | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00 |
| handleContextLogits.cpp | refactor: Speculative decoding buffers part 2 (#5316) | 2025-06-27 17:41:48 +02:00 |
| handleGenerationLogits.cpp | refactor: Speculative decoding buffers part 2 (#5316) | 2025-06-27 17:41:48 +02:00 |
| kvCacheEventManager.cpp | feat: KV events for sliding window attention (#5580) | 2025-07-05 06:05:20 +08:00 |
| kvCacheManager.cpp | feat:enable kvcache to be reused during request generation (#4028) | 2025-07-10 22:18:01 +09:00 |
| kvCacheTransferManager.cpp | [fix] Fix illegal mem access and possible accuracy lose. Cherry-pick … (#5017) | 2025-06-09 17:50:57 +08:00 |
| llmRequest.cpp | Re-implement LlmResponse in Python to reduce host overhead of pybind (#5224) | 2025-06-17 21:28:09 +08:00 |
| logitsPostProcessor.cpp | refactor: Speculative decoding buffers part 2 (#5316) | 2025-06-27 17:41:48 +02:00 |
| loraBuffers.cpp | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| loraBuffers.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| makeDecodingBatchInputOutput.cpp | refactor: decoding inputs (#5679) | 2025-07-06 08:21:02 +02:00 |
| medusaBuffers.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| microBatchScheduler.cpp | [nvbugs/5274894] fix: Sort requests for functional correctness and performance (adapted from #4608) (#4621) | 2025-05-26 17:10:55 +08:00 |
| mlaCacheFormatter.cpp | fix: Disaggregate serving with attention DP (#4993) | 2025-07-08 16:15:03 +08:00 |
| mlaCacheFormatter.h | fix: Disaggregate serving with attention DP (#4993) | 2025-07-08 16:15:03 +08:00 |
| pauseRequests.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| peftCacheManager.cpp | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| promptTuningBuffers.cpp | perf: Removing initializing ptuning buffers to zero (#4915) | 2025-06-09 21:57:21 -04:00 |
| rnnStateBuffers.cpp | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| rnnStateBuffers.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| rnnStateManager.cpp | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| runtimeBuffers.cpp | fix: Improve chunking test and skip empty kernel calls (#5710) | 2025-07-04 09:08:15 +02:00 |
| scheduledBlocksManager.h | refactor: Scheduling based on KV cache state (#4865) | 2025-06-16 08:14:58 +02:00 |
| sequenceSlotManager.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| transformerBuffers.cpp | refactor: remove batch_manager::KvCacheConfig and use executor::KvCacheConfig instead (#5384) | 2025-06-26 19:45:52 +08:00 |
| trtEncoderModel.cpp | refactor: remove TrtGptModelOptionalParams (#5165) | 2025-06-20 10:31:40 +02:00 |
| trtEncoderModel.h | refactor: remove TrtGptModelOptionalParams (#5165) | 2025-06-20 10:31:40 +02:00 |
| trtGptModel.h | refactor: remove TrtGptModelOptionalParams (#5165) | 2025-06-20 10:31:40 +02:00 |
| trtGptModelFactory.h | refactor: remove TrtGptModelOptionalParams (#5165) | 2025-06-20 10:31:40 +02:00 |
| trtGptModelInflightBatching.cpp | feat:enable kvcache to be reused during request generation (#4028) | 2025-07-10 22:18:01 +09:00 |
| trtGptModelInflightBatching.h | feat:enable kvcache to be reused during request generation (#4028) | 2025-07-10 22:18:01 +09:00 |
| updateDecoderBuffers.cpp | refactor: Speculative decoding buffers part 2 (#5316) | 2025-06-27 17:41:48 +02:00 |