TensorRT-LLM/cpp/tensorrt_llm/batch_manager
amitz-nv 64c878818b
[TRTLLM-6683][feat] Support LoRA reload of CPU-cache-evicted adapters (#6786)
Signed-off-by: Amit Zuker <203509407+amitz-nv@users.noreply.github.com>
2025-08-11 14:31:39 -04:00
utils refactor: Speculative decoding buffers part 2 (#5316) 2025-06-27 17:41:48 +02:00
allocateKvCache.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
assignReqSeqSlots.cpp refactor: Remove enforced sorted order of batch slots (#3502) 2025-07-14 17:23:02 +02:00
cacheFormatter.cpp [None][chore] optimize kv cache transfer for context TEP and gen DEP (#6657) 2025-08-07 11:36:05 +08:00
cacheFormatter.h [TRTLLM-6549] chore: record delay introduced by disaggregated serving in kv cache measure (#6135) 2025-07-30 10:39:40 +08:00
cacheTransBuffer.cpp chore: [BREAKING CHANGE] use cacheTransceiverConfig as knobs for disagg service (#5234) 2025-07-17 17:42:07 +08:00
cacheTransBuffer.h chore: [BREAKING CHANGE] use cacheTransceiverConfig as knobs for disagg service (#5234) 2025-07-17 17:42:07 +08:00
cacheTransceiver.cpp [None][chore] ucx establish connection with zmq (#6090) 2025-08-05 02:50:45 -04:00
capacityScheduler.cpp refactor: Scheduling based on KV cache state (#4865) 2025-06-16 08:14:58 +02:00
CMakeLists.txt feat: Support structural tag in C++ runtime and upgrade xgrammar to 0.1.21 (#6408) 2025-07-31 09:53:52 +08:00
contextProgress.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
createNewDecoderRequests.cpp [None][refactor] Simplify finish reasons handling in DecoderState (#6524) 2025-08-02 07:17:43 +02:00
dataTransceiver.cpp feat: use session abstraction in data transceiver and cache formatter (#5611) 2025-07-16 13:52:44 +08:00
dataTransceiver.h [TRTLLM-6549] chore: record delay introduced by disaggregated serving in kv cache measure (#6135) 2025-07-30 10:39:40 +08:00
dataTransceiverImpl.cpp [TRTLLM-6549] chore: record delay introduced by disaggregated serving in kv cache measure (#6135) 2025-07-30 10:39:40 +08:00
dataTransceiverImpl.h feat: use session abstraction in data transceiver and cache formatter (#5611) 2025-07-16 13:52:44 +08:00
decoderBuffers.cpp refactor: Enhanced handling of decoder requests and logits within the batch manager (#6055) 2025-07-18 12:12:08 +02:00
encoderBuffers.cpp Feat: Variable-Beam-Width-Search (VBWS) part 4 (#3979) 2025-05-12 22:32:29 +02:00
encoderBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
evictionPolicy.cpp [JIRA-5226219][fix] Fix Bug in KV cache manager (#4596) 2025-05-29 22:03:20 -07:00
guidedDecoder.cpp feat: Support structural tag in C++ runtime and upgrade xgrammar to 0.1.21 (#6408) 2025-07-31 09:53:52 +08:00
handleContextLogits.cpp refactor: Enhanced handling of decoder requests and logits within the batch manager (#6055) 2025-07-18 12:12:08 +02:00
handleGenerationLogits.cpp refactor: Enhanced handling of decoder requests and logits within the batch manager (#6055) 2025-07-18 12:12:08 +02:00
kvCacheEventManager.cpp feat: KV events for sliding window attention (#5580) 2025-07-05 06:05:20 +08:00
kvCacheManager.cpp fix: remove duplicate layer multiplication in KV cache size calculation (#6481) 2025-07-31 22:34:34 -04:00
kvCacheTransferManager.cpp [fix] Fix illegal mem access and possible accuracy loss. Cherry-pick … (#5017) 2025-06-09 17:50:57 +08:00
llmRequest.cpp [TRTLLM-6683][feat] Support LoRA reload of CPU-cache-evicted adapters (#6786) 2025-08-11 14:31:39 -04:00
logitsPostProcessor.cpp refactor: Enhanced handling of decoder requests and logits within the batch manager (#6055) 2025-07-18 12:12:08 +02:00
loraBuffers.cpp fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) 2025-05-19 14:25:36 -07:00
loraBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
makeDecodingBatchInputOutput.cpp refactor: Enhanced handling of decoder requests and logits within the batch manager (#6055) 2025-07-18 12:12:08 +02:00
medusaBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
microBatchScheduler.cpp [nvbugs/5274894] fix: Sort requests for functional correctness and performance (adapted from #4608) (#4621) 2025-05-26 17:10:55 +08:00
mlaCacheFormatter.cpp [None][chore] optimize kv cache transfer for context TEP and gen DEP (#6657) 2025-08-07 11:36:05 +08:00
mlaCacheFormatter.h [TRTLLM-6549] chore: record delay introduced by disaggregated serving in kv cache measure (#6135) 2025-07-30 10:39:40 +08:00
pauseRequests.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
peftCacheManager.cpp [TRTLLM-6683][feat] Support LoRA reload of CPU-cache-evicted adapters (#6786) 2025-08-11 14:31:39 -04:00
promptTuningBuffers.cpp perf: Stop initializing ptuning buffers to zero (#4915) 2025-06-09 21:57:21 -04:00
rnnStateBuffers.cpp [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) 2025-05-14 23:10:04 +02:00
rnnStateBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
rnnStateManager.cpp fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) 2025-05-19 14:25:36 -07:00
runtimeBuffers.cpp Revert "feat: nanobind bindings (#5961)" (#6160) 2025-07-18 10:12:54 +08:00
scheduledBlocksManager.h refactor: Scheduling based on KV cache state (#4865) 2025-06-16 08:14:58 +02:00
sequenceSlotManager.cpp refactor: Remove enforced sorted order of batch slots (#3502) 2025-07-14 17:23:02 +02:00
transformerBuffers.cpp refactor: remove batch_manager::KvCacheConfig and use executor::KvCacheConfig instead (#5384) 2025-06-26 19:45:52 +08:00
trtEncoderModel.cpp refactor: remove TrtGptModelOptionalParams (#5165) 2025-06-20 10:31:40 +02:00
trtEncoderModel.h refactor: remove TrtGptModelOptionalParams (#5165) 2025-06-20 10:31:40 +02:00
trtGptModel.h refactor: remove TrtGptModelOptionalParams (#5165) 2025-06-20 10:31:40 +02:00
trtGptModelFactory.h refactor: remove TrtGptModelOptionalParams (#5165) 2025-06-20 10:31:40 +02:00
trtGptModelInflightBatching.cpp [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974) 2025-07-25 18:10:40 -04:00
trtGptModelInflightBatching.h [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974) 2025-07-25 18:10:40 -04:00
updateDecoderBuffers.cpp refactor: Speculative decoding buffers part 2 (#5316) 2025-06-27 17:41:48 +02:00