* refactor: remove cumLogProbs and logProbs from DecoderBuffers
- Eliminated cumLogProbs and logProbs from DecoderBuffers, streamlining the buffer management.
- Updated related code in decoderBuffers.cpp and bindings.cpp to reflect these changes, ensuring that only host pointers are used for log probabilities.
These modifications enhance code clarity and maintainability by reducing redundancy in buffer management.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: streamline sequence length handling in GptDecoderBatched and StatefulGptDecoderBatched
- Updated GptDecoderBatched to directly use output.sequenceLengths for lengths assignment, removing unnecessary reshaping.
- Adjusted StatefulGptDecoderBatched to ensure sequence lengths are correctly shaped based on actual batch size and max beam width.
These changes enhance clarity and maintainability in the decoding process.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: integrate DecoderState for sequence length management in decoding process
- Updated DecoderBuffers to remove direct handling of sequence lengths, now utilizing DecoderState for this purpose.
- Adjusted MakeDecodingBatchInputOutput to accept DecoderState, enhancing clarity in the decoding input/output management.
- Refactored GptDecoderBatched and StatefulGptDecoderBatched to streamline sequence length handling, ensuring consistency across the decoding workflow.
refactor: update SlotDecoderBuffers to manage sequence lengths directly
- Introduced sequenceLengths and sequenceLengthsHost members in SlotDecoderBuffers for better management of sequence lengths (see the sketch below).
- Refactored asyncSend and recv methods to utilize the new sequenceLengths member, enhancing clarity and reducing redundancy.
- Updated TrtGptModelInflightBatching to align with the new structure, ensuring consistent handling of sequence lengths across the decoding process.
These changes improve maintainability and streamline the decoding workflow.
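For illustration, a minimal sketch of the slot-level layout this introduces, using plain containers as stand-ins for the runtime tensor types (the member names sequenceLengths and sequenceLengthsHost follow this description; everything else is illustrative):

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Minimal sketch, not the actual TensorRT-LLM class: each slot keeps a
// device-side sequence length buffer plus a host mirror, so asyncSend/recv
// can move lengths together with the other slot buffers instead of taking
// them as separate arguments.
struct SlotDecoderBuffers
{
    using BufferPtr = std::shared_ptr<std::vector<std::int32_t>>; // stand-in for a tensor handle

    BufferPtr outputIds;           // decoded token ids for this slot (illustrative member)
    BufferPtr sequenceLengths;     // per-beam sequence lengths, device side
    BufferPtr sequenceLengthsHost; // host copy, filled after the decoder step
};
```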
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Delegate to asyncSend method in SlotDecoderBuffers
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
---------
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Update ExecutorConfig to use AdditionalModelOutput type
- Changed function signatures and member variables across multiple files, replacing std::optional<std::vector<std::string>> with std::optional<std::vector<executor::AdditionalModelOutput>> so that each additional output carries a gatherContext flag (see the sketch below).
- Updated related serialization and deserialization methods to accommodate the new type.
- Adjusted tests to reflect the changes in the output handling structure.
This refactor enhances the flexibility and maintainability of the output configuration in the executor and batch manager components.
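As a rough illustration of the new configuration shape (field names follow this description; the output name and the exact definition of executor::AdditionalModelOutput are assumptions):

```cpp
#include <optional>
#include <string>
#include <vector>

namespace executor
{
// Sketch of the per-output descriptor: a tensor name plus a gatherContext flag
// that controls whether the output is also gathered for context tokens.
struct AdditionalModelOutput
{
    std::string name;
    bool gatherContext{false};
};
} // namespace executor

// Before: std::optional<std::vector<std::string>> additionalOutputs;
// After: each requested output opts into context gathering individually.
std::optional<std::vector<executor::AdditionalModelOutput>> additionalModelOutputs
    = std::vector<executor::AdditionalModelOutput>{{"hidden_states", /*gatherContext=*/true}};
```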
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Remove equality operator from TrtGptModelOptionalParams
- Deleted the operator== implementation from TrtGptModelOptionalParams to simplify the class.
- Updated the pybind11 bindings to remove the exposure of the equality operator to Python.
This change streamlines the class definition and reduces unnecessary complexity in the bindings.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Enhance copyAdditionalOutputs to utilize AdditionalModelOutput
- Updated the copyAdditionalOutputs function to accept a vector of AdditionalModelOutput, allowing the gatherContext flag to be honored for each output (see the sketch below).
- Adjusted the logic to handle context and non-context outputs separately, improving the output handling mechanism.
- Modified related unit tests to incorporate the new gatherContext parameter, ensuring comprehensive testing of the updated functionality.
This refactor improves the flexibility and clarity of output management in the batch processing workflow.
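A toy-level sketch of the split described above (the real function copies engine output tensors per request; all names other than gatherContext are illustrative):

```cpp
#include <string>
#include <vector>

// Reuses the AdditionalModelOutput shape sketched earlier: name + gatherContext.
struct AdditionalModelOutput
{
    std::string name;
    bool gatherContext{false};
};

// Generation-phase outputs are always collected; context-phase outputs are
// collected only for entries that opt in via gatherContext.
void copyAdditionalOutputs(std::vector<AdditionalModelOutput> const& requested,
    std::vector<std::string>& generationOutputs, std::vector<std::string>& contextOutputs)
{
    for (auto const& output : requested)
    {
        generationOutputs.push_back(output.name); // non-context (generation) output
        if (output.gatherContext)
        {
            contextOutputs.push_back(output.name); // context output, opt-in per entry
        }
    }
}
```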
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Introduce findOutputTensor utility function for output tensor retrieval
- Added a new utility function, findOutputTensor, to encapsulate the logic for finding output tensors and checking their validity (see the sketch below).
- Refactored copyAdditionalOutputs to utilize findOutputTensor, reducing code duplication and improving clarity.
- Enhanced error checking for additional context and generation output tensors.
This change streamlines the output tensor retrieval process, enhancing maintainability and readability in the batch processing workflow.
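For illustration only, a minimal sketch of what such a helper might look like (the signature and container type are assumptions, not the actual TensorRT-LLM code):

```cpp
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

using Tensor = std::vector<float>; // stand-in for the runtime tensor type

// Encapsulated lookup: find an output tensor by name and fail with a clear
// error if it is missing, so copyAdditionalOutputs does not repeat the same
// validity check for context and generation outputs.
Tensor const& findOutputTensor(std::map<std::string, Tensor> const& outputTensors, std::string const& name)
{
    auto const it = outputTensors.find(name);
    if (it == outputTensors.end())
    {
        throw std::runtime_error("Additional output tensor not found: " + name);
    }
    return it->second;
}
```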
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Check final indices of additional output tensors and update tests
- Added checks to verify the final indices of additional output tensors for context and generation outputs.
- Updated unit tests to verify the changes.
- Added the lastTokenIds input tensor to the test engines.
- The logits output depends on the gatherContextLogits parameter.
- Removed the gatherContextOutputs parameter from the validate method in LlmRequest.
- Context outputs do not depend on the computeContextLogits parameter.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* fixup! refactor: Check final indices of additional output tensors and update tests
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* fixup! refactor: Update ExecutorConfig to use AdditionalModelOutput type
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* fixup! refactor: Remove equality operator from TrtGptModelOptionalParams
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* docs: Update executor.md
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* chore: Clean up includes
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
---------
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* add a test mapping the flashinfer rope op to the triton custom rope ops and the pytorch rope in fused_mha
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* add rope matcher and unit tests
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* capture cos and sin from graph
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* revert fuse_mha op change
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* minor update to address review comments and remove a redundant unit test
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* move view and transpose into graph nodes and update unit test to test custom op directly
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* move the view into the custom op, add a bound to the BFS, and update the custom op return type to half precision
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* custom op update to support 3D input
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* handle bnsd and bsnd format, update tests, handle 3D cos/sin input to the custom op
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* add llama4 rope test, update custom op with is_neox flag
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* add llama4 style rope to matcher and update unit test
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* separate into two transformations
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* fix the case when num_head != num_kv_head; add support for cached position_ids and cos_sin_cache in the graph; update unit tests
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* minor update, cache locally and propagate meta info of qk nodes
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* minor: fix cos_sin_cache not float
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* minor: move cache into matcher
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
---------
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* One of the tactics is not supported during dispatch.
* final_hidden_states should be unpacked when min_latency_mode is not enabled.
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
* feat: Add NVFP4 UB pattern optimization pass in torch compile
* Add an additional flag for the UB fp4 pattern to avoid inverting the scale
* Add NVFP4-related UB patterns
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
* Update atol; some points fail for B200 umbriel.
Signed-off-by: liji-nv <59594262+liji-nv@users.noreply.github.com>
---------
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Signed-off-by: liji-nv <59594262+liji-nv@users.noreply.github.com>
* feat: trtllm-gen fp4 GEMM
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Clean up
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Remove incorrect header
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Address reviewer comment
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
---------
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Rename nvsmall to Nemotron NAS
* Revert the nvsmall to nemotron_nas rename in paths for tests that access llm_models_root/nvsmall/tests
* Add NemotronNAS to the PyTorch supported models table
Signed-off-by: Amit Zuker <203509407+amitz-nv@users.noreply.github.com>
* Update some descriptions that are out of date
Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>
* Apply suggestions from code review
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
---------
Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
- Added a new entry in the README for the published benchmarking best practices for DeepSeek-R1.
- Introduced a new blog post detailing performance benchmarking configurations and procedures for DeepSeek-R1 in TensorRT-LLM, including installation, dataset preparation, and benchmarking steps for both B200 and H200 GPUs.
Signed-off-by: taoli <litaotju@users.noreply.github.com>
Co-authored-by: taoli <litaotju@users.noreply.github.com>