TensorRT-LLM/cpp/include/tensorrt_llm/batch_manager
Daniel Cámpora 1299f27c74
fix: Fix C++ decoder synchronization in PyTorch (#3106)
* Use updateDecoderBuffers in the Python decoder.
* Fix synchronize in the trtllm decoder.
* Enable by default.
* Use guided_decoder to set up seq slots and free them.
* Always use decode_async and update_requests.
* Update decoder buffers.
* Fix speculative decoding tests.
* Send new_tensors_host instead of assuming a dict.
* Make the default False in enable_trtllm_decoder.
* Partially fix MTP, partially fix py_executor.
* Update request states before sending the disagg context cache.
* Fix the disagg test for the torch decoder.
* Add isend_tensor_list and recv_tensor_list for sending tensors_host (a generic sketch of this pattern follows the commit summary).
* Formatting.
* Fix rebase.
* Add a disaggregated-serving case to the guided decoder.
* Get overlap scheduling to work.
* Update cutlass to main.
* Update after rebasing.
* Formatting.
* Update to use decode_async and update_requests.
* Properly pass information to update_requests.
* Formatting.
* Bring disaggregated serving a step closer to working.
* Fix rebase.
* Fix rebase and format.
* Copy new device tokens in a more Pythonic way.
* Restore MTP add dummy reqs.
* Add OrderedDict import to py_executor.
* Formatting.
* Add seq slot manager and a test.
* Use single-tensor transmission except when a list of tensors is received.
* Add TRTLLMDecoder allocation to estimate max KV cache tokens.
* Add stream synchronization.
* Formatting.
* Make the decoder memory calculation adapt to the chosen decoder. Recognize the decoder option passed in ExecutorConfig. Run the overlap scheduler test on TinyLlama.
* Format.
* Add decoder creation to estimate max KV.
* Formatting.
* Update the UCXX submodule in line with main.

---------

Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
2025-04-23 23:55:27 +08:00
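One of the changes above adds isend_tensor_list and recv_tensor_list helpers for transmitting tensors_host between ranks. As a rough illustration of that pattern only (not the repository's actual implementation), a point-to-point transfer of a list of host tensors with torch.distributed might look like the sketch below; the helper names, the preallocated templates argument, and the count/tag metadata scheme are assumptions made to keep the example self-contained.

```python
# Illustrative sketch: sending a list of host tensors point-to-point with
# torch.distributed (gloo backend for CPU tensors). Not the actual
# TensorRT-LLM helpers; names and metadata handling are assumptions.
import torch
import torch.distributed as dist


def isend_tensor_list(tensors, dst, tag=0):
    """Send the list length, then each tensor; return the pending isend requests."""
    count = torch.tensor([len(tensors)], dtype=torch.int64)
    dist.send(count, dst=dst, tag=tag)  # blocking send of the list length
    reqs = []
    for i, t in enumerate(tensors):
        # The receiver must already know each shape/dtype (here via `templates`).
        reqs.append(dist.isend(t.contiguous(), dst=dst, tag=tag + 1 + i))
    return reqs


def recv_tensor_list(templates, src, tag=0):
    """Receive tensors into buffers shaped/typed like the entries of `templates`."""
    count = torch.empty(1, dtype=torch.int64)
    dist.recv(count, src=src, tag=tag)
    out = []
    for i in range(int(count.item())):
        buf = torch.empty_like(templates[i])
        dist.recv(buf, src=src, tag=tag + 1 + i)
        out.append(buf)
    return out
```

The count-then-tensors scheme with per-tensor tags is just one way to disambiguate multiple in-flight sends between the same pair of ranks; the sender should call wait() on the returned requests before reusing or freeing the source buffers.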
allocateKvCache.h Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
assignReqSeqSlots.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
cacheTransceiver.h bug: Fix hang bug when context server doesn't have enough capacity for KV Cache (#3095) 2025-04-21 15:16:55 +08:00
capacityScheduler.h feat: allocate minimal blocks per window size (#3028) 2025-04-17 16:04:57 +08:00
common.h open source 4dbf696ae9b74a26829d120b67ab8443d70c8e58 (#2297) 2024-10-08 12:19:19 +02:00
contextProgress.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
createNewDecoderRequests.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
decoderBuffers.h refactor: Introduce DecoderOutputBuffers per batch (#3506) 2025-04-22 12:25:53 +08:00
evictionPolicy.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
generateRequestOptions.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
guidedDecoder.h Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
handleContextLogits.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
handleGenerationLogits.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
kvCacheConfig.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
kvCacheEventManager.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
kvCacheManager.h bind block key and hasher (#3712) 2025-04-21 18:50:57 +08:00
kvCacheTransferManager.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
kvCacheUtils.h feat: allocate minimal blocks per window size (#3028) 2025-04-17 16:04:57 +08:00
llmRequest.h feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) 2025-04-21 14:31:01 +08:00
logitsPostProcessor.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
makeDecodingBatchInputOutput.h refactor: batch slot management in decoder classes (#3300) 2025-04-13 05:05:13 +08:00
medusaBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
microBatchScheduler.h Update TensorRT-LLM (#2502) 2024-11-26 16:51:34 +08:00
pauseRequests.h Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
peftCacheManager.h Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
peftCacheManagerConfig.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
promptTuningBuffers.h feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) 2025-04-21 14:31:01 +08:00
rnnStateManager.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
runtimeBuffers.h feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) 2025-04-21 14:31:01 +08:00
sequenceSlotManager.h Update TensorRT-LLM (#2413) 2024-11-05 16:27:06 +08:00
transformerBuffers.h Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) 2025-04-08 23:51:27 +08:00
trtGptModelOptionalParams.h feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) 2025-04-21 14:31:01 +08:00
updateDecoderBuffers.h fix: Fix C++ decoder synchronization in PyTorch (#3106) 2025-04-23 23:55:27 +08:00