TensorRT-LLM/cpp/tensorrt_llm/batch_manager
Latest commit fc7a81ceb0 by Enwei Zhu — test: Add LLGuidance test and refine guided decoding (#5348)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-06-25 14:12:56 +08:00
utils refactor: unique_ptr instead of shared_ptr (#4697) 2025-05-29 22:49:35 +02:00
allocateKvCache.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
assignReqSeqSlots.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
cacheFormatter.cpp chore: rename IOFormatter to BaseCacheFormatter (#5068) 2025-06-12 10:50:14 +08:00
cacheFormatter.h chore: rename IOFormatter to BaseCacheFormatter (#5068) 2025-06-12 10:50:14 +08:00
cacheTransBuffer.cpp Solve underallocation in VSWA+/VGQA (#4667) 2025-06-12 12:12:46 +08:00
cacheTransBuffer.h Solve underallocation in VSWA+/VGQA (#4667) 2025-06-12 12:12:46 +08:00
cacheTransceiver.cpp chore: rename IOFormatter to BaseCacheFormatter (#5068) 2025-06-12 10:50:14 +08:00
capacityScheduler.cpp refactor: Scheduling based on KV cache state (#4865) 2025-06-16 08:14:58 +02:00
CMakeLists.txt chore: cleanup GDS Cmake interface (#4928) 2025-06-10 17:25:43 +08:00
contextProgress.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
createNewDecoderRequests.cpp refactor: manage cache indirection in decoder state (#5315) 2025-06-24 09:15:59 +02:00
dataTransceiver.cpp Fabric Memory for KV Cache Transfer (#4717) 2025-05-30 15:50:21 +08:00
dataTransceiver.h chore: rename IOFormatter to BaseCacheFormatter (#5068) 2025-06-12 10:50:14 +08:00
dataTransceiverImpl.cpp chore: rename IOFormatter to BaseCacheFormatter (#5068) 2025-06-12 10:50:14 +08:00
dataTransceiverImpl.h chore: rename IOFormatter to BaseCacheFormatter (#5068) 2025-06-12 10:50:14 +08:00
decoderBuffers.cpp refactor: manage cache indirection in decoder state (#5315) 2025-06-24 09:15:59 +02:00
encoderBuffers.cpp Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
encoderBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
evictionPolicy.cpp [JIRA-5226219][fix] Fix Bug in KV cache manager (#4596) 2025-05-29 22:03:20 -07:00
guidedDecoder.cpp test: Add LLGuidance test and refine guided decoding (#5348) 2025-06-25 14:12:56 +08:00
handleContextLogits.cpp refactor: Update decoder buffer and logits management (#4450) 2025-06-18 08:10:32 +08:00
handleGenerationLogits.cpp refactor: Update decoder buffer and logits management (#4450) 2025-06-18 08:10:32 +08:00
kvCacheEventManager.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
kvCacheManager.cpp refactor: Scheduling based on KV cache state (#4865) 2025-06-16 08:14:58 +02:00
kvCacheTransferManager.cpp [fix] Fix illegal mem access and possible accuracy lose. Cherry-pick … (#5017) 2025-06-09 17:50:57 +08:00
llmRequest.cpp Re-implement LlmResponse in Python to reduce host overhead of pybind (#5224) 2025-06-17 21:28:09 +08:00
logitsPostProcessor.cpp refactor: Update decoder buffer and logits management (#4450) 2025-06-18 08:10:32 +08:00
loraBuffers.cpp fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) 2025-05-19 14:25:36 -07:00
loraBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
makeDecodingBatchInputOutput.cpp refactor: manage cache indirection in decoder state (#5315) 2025-06-24 09:15:59 +02:00
medusaBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
microBatchScheduler.cpp [nvbugs/5274894] fix: Sort requests for functional correctness and performance (adapted from #4608) (#4621) 2025-05-26 17:10:55 +08:00
mlaCacheFormatter.cpp Kv cache transfer support duplicate heads (#4929) 2025-06-09 14:11:19 +08:00
mlaCacheFormatter.h chore: rename IOFormatter to BaseCacheFormatter (#5068) 2025-06-12 10:50:14 +08:00
pauseRequests.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
peftCacheManager.cpp fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) 2025-05-19 14:25:36 -07:00
promptTuningBuffers.cpp perf: Removing initializing ptuning buffers to zero (#4915) 2025-06-09 21:57:21 -04:00
rnnStateBuffers.cpp [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) 2025-05-14 23:10:04 +02:00
rnnStateBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
rnnStateManager.cpp fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) 2025-05-19 14:25:36 -07:00
runtimeBuffers.cpp refactor: manage cache indirection in decoder state (#5315) 2025-06-24 09:15:59 +02:00
scheduledBlocksManager.h refactor: Scheduling based on KV cache state (#4865) 2025-06-16 08:14:58 +02:00
sequenceSlotManager.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
transformerBuffers.cpp refactor: manage cache indirection in decoder state (#5315) 2025-06-24 09:15:59 +02:00
trtEncoderModel.cpp refactor: remove TrtGptModelOptionalParams (#5165) 2025-06-20 10:31:40 +02:00
trtEncoderModel.h refactor: remove TrtGptModelOptionalParams (#5165) 2025-06-20 10:31:40 +02:00
trtGptModel.h refactor: remove TrtGptModelOptionalParams (#5165) 2025-06-20 10:31:40 +02:00
trtGptModelFactory.h refactor: remove TrtGptModelOptionalParams (#5165) 2025-06-20 10:31:40 +02:00
trtGptModelInflightBatching.cpp refactor: manage cache indirection in decoder state (#5315) 2025-06-24 09:15:59 +02:00
trtGptModelInflightBatching.h refactor: manage cache indirection in decoder state (#5315) 2025-06-24 09:15:59 +02:00
updateDecoderBuffers.cpp refactor: Separate DecoderState from GptDecoderBatched (#4700) 2025-06-03 09:42:01 +02:00