TensorRT-LLM/cpp/tensorrt_llm/batch_manager
Robin Kobus d7386d14a8
refactor: Simplify disableLookahead and improve numDecodingEngineTokens handling (#3103)
* refactor: Simplify disableLookahead method

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* Update DecoderBuffers comments

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Move numDecodingEngineTokens to DecoderState

This commit introduces new methods in the DecoderState class to manage the number of tokens for each request in a batch. The following changes were made (a sketch of the new accessors follows the list):
- Added `getNumDecodingEngineTokens()` to retrieve the number of tokens for all requests.
- Added `getNumDecodingEngineTokens(SizeType32 batchIdx)` to get the token count for a specific request.
- Added `setNumDecodingEngineTokens(SizeType32 batchIdx, SizeType32 numTokens)` to set the token count for a specific request.
- Updated the setup method to initialize the token count vector based on the maximum batch size.
- Refactored the `CreateNewDecoderRequests` class to utilize the new token management methods, improving clarity and maintainability.
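
For illustration, here is a minimal C++ sketch of the accessor shape described above. It is not the actual TensorRT-LLM implementation: the member name `mNumDecodingEngineTokens`, the bounds asserts, and the assumption that `SizeType32` aliases `int32_t` are all illustrative.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using SizeType32 = std::int32_t; // assumed alias for the runtime's 32-bit size type

class DecoderState
{
public:
    // setup(): size the per-request token counts once, based on the maximum batch size.
    void setup(SizeType32 maxBatchSize)
    {
        mNumDecodingEngineTokens.assign(maxBatchSize, 0);
    }

    // Number of decoding engine tokens for all requests in the batch.
    [[nodiscard]] std::vector<SizeType32> const& getNumDecodingEngineTokens() const
    {
        return mNumDecodingEngineTokens;
    }

    // Token count for a specific request.
    [[nodiscard]] SizeType32 getNumDecodingEngineTokens(SizeType32 batchIdx) const
    {
        assert(batchIdx >= 0 && batchIdx < static_cast<SizeType32>(mNumDecodingEngineTokens.size()));
        return mNumDecodingEngineTokens[batchIdx];
    }

    // Set the token count for a specific request.
    void setNumDecodingEngineTokens(SizeType32 batchIdx, SizeType32 numTokens)
    {
        assert(batchIdx >= 0 && batchIdx < static_cast<SizeType32>(mNumDecodingEngineTokens.size()));
        mNumDecodingEngineTokens[batchIdx] = numTokens;
    }

private:
    std::vector<SizeType32> mNumDecodingEngineTokens; // illustrative member name
};
```

Centralizing the counts behind these accessors lets callers such as `CreateNewDecoderRequests` query and update per-request token counts without reaching into decoder buffers directly, which is the clarity and maintainability gain the commit describes.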

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Improve shape variables in DecoderState

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

---------

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-04-01 18:47:31 +08:00
utils Fix logits dtype in assert (#3038) 2025-03-25 10:35:21 +08:00
allocateKvCache.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
assignReqSeqSlots.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
cacheFormatter.cpp feat: Add BW measurement (#3070) 2025-03-28 10:53:00 +08:00
cacheFormatter.h feat: Add BW measurement (#3070) 2025-03-28 10:53:00 +08:00
cacheTransceiver.cpp feat: Add BW measurement (#3070) 2025-03-28 10:53:00 +08:00
capacityScheduler.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
CMakeLists.txt Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
contextProgress.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
createNewDecoderRequests.cpp refactor: Simplify disableLookahead and improve numDecodingEngineTokens handling (#3103) 2025-04-01 18:47:31 +08:00
dataTransceiver.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
dataTransceiver.h feat: Add BW measurement (#3070) 2025-03-28 10:53:00 +08:00
dataTransceiverImpl.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
dataTransceiverImpl.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
decoderBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
encoderBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
encoderBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
evictionPolicy.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
generateRequestOptions.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
guidedDecoder.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
handleContextLogits.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
handleGenerationLogits.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
kvCacheEventManager.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
kvCacheManager.cpp fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983) 2025-03-24 22:49:52 +08:00
kvCacheTransferManager.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
llmRequest.cpp Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
logitsPostProcessor.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
loraBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
loraBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
makeDecodingBatchInputOutput.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
medusaBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
microBatchScheduler.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
mlaCacheFormatter.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
mlaCacheFormatter.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
pauseRequests.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
peftCacheManager.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
promptTuningBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
promptTuningBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
rnnStateBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
rnnStateBuffers.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
rnnStateManager.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
runtimeBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
sequenceSlotManager.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
transformerBuffers.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
trtEncoderModel.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
trtEncoderModel.h Revert "refactor: Replace DecoderFinishedEvent with CudaEvent in decoder clas…" (#3183) 2025-04-01 12:49:27 +08:00
trtGptModel.h fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983) 2025-03-24 22:49:52 +08:00
trtGptModelFactory.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
trtGptModelInflightBatching.cpp refactor: Simplify disableLookahead and improve numDecodingEngineTokens handling (#3103) 2025-04-01 18:47:31 +08:00
trtGptModelInflightBatching.h Revert "refactor: Replace DecoderFinishedEvent with CudaEvent in decoder clas…" (#3183) 2025-04-01 12:49:27 +08:00
trtGptModelV1.cpp v1.2 (#3082) 2025-03-26 23:31:29 +08:00
trtGptModelV1.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00