* add bidirectional support and fix EarlyStopDecoder unsqueeze to be compatible with LogitsStorage
Signed-off-by: Rohan Varma <rohanv@nvidia.com>
* run pre-commit
Signed-off-by: Rohan Varma <rohanv@nvidia.com>
* use ModelConfig.is_generation instead of a bidirectional flag
Signed-off-by: Rohan Varma <rohanv@nvidia.com>
* fix unit test to extract logits from correct dim
Signed-off-by: Rohan Varma <rohanv@nvidia.com>
---------
Signed-off-by: Rohan Varma <rohanv@nvidia.com>
* Fix TRTLLMSampler.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Added type hint.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
---------
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
chore: restore symmetry of worker start/shutdown
chore: fix return type of cal_max_tokens
chore: type some more return values
fix: free resources before re-claiming
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
* refactor: Copy sequence lengths once in decoder setup
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Update DecoderInputBuffers to remove duplicated buffers
- Renamed and reorganized buffer variables in decoderBuffers.h and decoderBuffers.cpp for better readability.
- Adjusted references in generateRequestOptions.cpp to align with the new buffer structure.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Move getEmbeddingBias to anonymous namespace
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Filter context requests
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: GenerateRequestOptions using more fine-grained functions
- Added a new method `createDecoderRequests` to encapsulate the logic for creating decoder requests from finished context requests.
- Updated the `operator()` method to utilize the new method, improving code clarity and maintainability.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Update TRTLLMDecoder
- Updated the `generate_request_options` call.
- Updated the `make_decoding_batch_input_output` call.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Remove const where we modify input buffers
- Changed `DecoderInputBuffers` parameters from const references to non-const references in multiple functions to allow modifications.
- Updated related function calls to ensure compatibility with the new parameter types.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* fixup! refactor: Copy sequence lengths once in decoder setup
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
---------
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* Add test case for kv memory estimation
* Dump the run log to a file and parse the kv cache memory size from it
* Set bigger peak memory size for the mixed precision case and the test_ptp_quickstart_advanced_eagle3 case
* Revert change to usage of fraction
* use context manager to guard temp files
Signed-off-by: Hui Gao <huig@nvidia.com>
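A minimal sketch of the log-parsing approach described above, using a hypothetical log line and helper names; the actual test's log wording, paths, and regex may differ.

```python
import re
import tempfile
from pathlib import Path

# Hypothetical log wording; the real TensorRT-LLM log line may differ.
KV_CACHE_PATTERN = re.compile(r"Allocated ([\d.]+) GiB for KV cache")

def parse_kv_cache_gib(log_text: str) -> float:
    """Return the KV cache allocation (in GiB) reported in a captured run log."""
    match = KV_CACHE_PATTERN.search(log_text)
    if match is None:
        raise ValueError("KV cache size not found in log")
    return float(match.group(1))

# Temp files are guarded by a context manager so they are removed even if
# the run or the parsing fails.
with tempfile.TemporaryDirectory() as tmpdir:
    log_file = Path(tmpdir) / "run.log"
    log_file.write_text("[INFO] Allocated 12.5 GiB for KV cache\n")
    assert parse_kv_cache_gib(log_file.read_text()) == 12.5
```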
Prefetching safetensors files so that they are stored in the system file
cache. This significantly speeds up the model weight loading for the
very first run after entering the docker container.
This is beneficial because model weight loading is done layer by layer,
which means the safetensors files are read chunk by chunk; that cannot
utilize network bandwidth well when the files are stored on a network
drive. Loading the whole files in bulk instead achieves higher bandwidth
utilization.
When running with world_size>1, all ranks collaboratively prefetch these
files.
In theory, we should add heuristics to decide whether to prefetch the
files or not, but that is beyond the scope of this commit.
For example, when CPU memory is small, prefetching may cause file cache
thrashing and slow down weight loading.
Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
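A minimal sketch of the bulk-prefetch idea, using hypothetical helper and argument names rather than the actual weight-loader code: each rank reads a disjoint subset of the safetensors files in large chunks purely to warm the OS file cache before the layer-by-layer load starts.

```python
import glob
import os

def prefetch_safetensors(checkpoint_dir: str, rank: int, world_size: int,
                         chunk_bytes: int = 64 * 1024 * 1024) -> None:
    """Read this rank's share of the safetensors files in bulk so their
    contents land in the system file cache; the data itself is discarded."""
    files = sorted(glob.glob(os.path.join(checkpoint_dir, "*.safetensors")))
    for path in files[rank::world_size]:  # ranks prefetch disjoint subsets
        with open(path, "rb", buffering=0) as f:
            while f.read(chunk_bytes):
                pass
```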
* fix add_dummy_requests.
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
* add max_seq_len to eagle3 test and fix add_dummy_requests.
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
* fix prompt_len in add_dummy_requests.
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
* add prepare_resource condition in add_dummy_requests.
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
* add some description of token_nums to add_dummy_requests and fix token_nums in torch compile warmup.
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
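A hedged illustration of the token_nums semantics described above, with a simplified, hypothetical signature; the real add_dummy_requests lives in the PyTorch-backend resource manager and takes more arguments.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DummyRequest:
    request_id: int
    token_num: int  # total tokens this dummy request should occupy during warmup

def add_dummy_requests(request_ids: List[int], token_nums: List[int]) -> List[DummyRequest]:
    """token_nums[i] is the number of tokens reserved for dummy request i, so
    warmup (e.g. torch.compile) exercises the intended sequence lengths."""
    assert len(request_ids) == len(token_nums)
    return [DummyRequest(rid, n) for rid, n in zip(request_ids, token_nums)]

# Warm up with dummy requests covering representative token counts.
warmup_requests = add_dummy_requests(request_ids=[0, 1], token_nums=[32, 1024])
```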
* fix available_tokens.
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
---------
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
* Properly get decoding mode according to the same logic as cpp.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Cross-reference the getDecodingMode implementations between pytorch and cpp.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Better bindings for DecodingMode.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Revert to version in main.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Fix.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Revert configuration.py.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
---------
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* disable overlap in encoder
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* feat: invokeGatherBatch
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* feat: overlap same batch
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* chore: add enableTrtOverlap to ExecutorConfig
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* disable overlap for beam search and spec decode
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* skip overlap tests with beam search or speculative decoding
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* moveFinishedContextRequestsToGeneration and skip unfinished requests in updateRequests
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* enable overlap in GptChunkedLongContextTests
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* feat: Enable overlap in gptManagerBenchmark
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* feat: Improve early exit
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Use OptionalRef for newOutputTokens tensor
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* feat: Add overlap scheduling support to TRTLLMDecoder
- Updated TRTLLMDecoder to accept an `enable_overlap_scheduler` parameter.
- Modified the decoder's internal logic to utilize the overlap scheduling feature.
- Adjusted the sequence lengths handling to ensure compatibility with the new scheduling approach.
- Enhanced unit tests to include cases for the overlap scheduler with the TRTLLMDecoder.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
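A minimal sketch, with hypothetical names, of how an enable_overlap_scheduler flag might be threaded through a decoder wrapper; it only illustrates the shape of the change, not the actual TRTLLMDecoder implementation.

```python
class OverlapAwareDecoder:
    """Illustrative stand-in for a decoder that honors an overlap-scheduler flag."""

    def __init__(self, enable_overlap_scheduler: bool = False):
        self.enable_overlap_scheduler = enable_overlap_scheduler
        self._pending_step = None  # result of the previously launched step, if any

    def decode_step(self, batch):
        if self.enable_overlap_scheduler:
            # Overlap path: launch this step, then return the result of the
            # previous step, so sequence lengths lag the launched step by one.
            result, self._pending_step = self._pending_step, self._launch(batch)
            return result
        # Synchronous path: each step is launched and consumed immediately.
        return self._launch(batch)

    def _launch(self, batch):
        # Placeholder for the actual decoding call.
        return {"tokens": [], "batch": batch}
```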
* fix: allNewTokens in PP
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
---------
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* Instantiate decoder early to have better mem estimation.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Improve mem estimation by instantiating decoder earlier.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
---------
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Move all casters to customCasters.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Use customCasters in all bindings.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* Added customCasters to userbuffers.
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
---------
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
* support lp in pytorch backend
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
* fix tp
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
---------
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
* Add a new param to LlmRequest and Request to natively support mm
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
* update comment
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
* Update tests to match the new LlmRequest constructor parameters
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
* Modify unit test and mm_embedding's dict name in llama4
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
* Fix based on comments
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
* Fix comment
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
* Fix LlmRequest initialization in kvCacheManagerTest
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
* Clean up code for prompt_tuning_config
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
* Clean up prompt_tuning_config in GenerationRequest
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
---------
Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
Co-authored-by: Haohang Huang <31998628+symphonylyh@users.noreply.github.com>