Commit Graph

417 Commits

Author SHA1 Message Date
William Tambellini
af67bf00a8
feat: register ENABLE_MULTI_DEVICE and ENABLE_UCX as CMake options (#3343)
No change of default value (still ON).
These were hidden cmake vars before that patch.
Fix issue #3289

Signed-off-by: William Tambellini <wtambellini@sdl.com>
Co-authored-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
2025-04-14 10:30:23 +08:00
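
For illustration, a minimal sketch of what registering the two flags as first-class CMake options can look like. Only the names ENABLE_MULTI_DEVICE and ENABLE_UCX and the ON defaults come from the commit; the description strings and the usage below are assumptions.

```cmake
# Hedged sketch: option() promotes a previously hidden cache variable to a
# documented, user-facing switch; defaults stay ON per the commit message.
option(ENABLE_MULTI_DEVICE "Enable multi-device (multi-GPU) support" ON)
option(ENABLE_UCX "Enable UCX support" ON)

# Downstream code can then branch on the flag, e.g.:
if(ENABLE_MULTI_DEVICE)
    add_compile_definitions(ENABLE_MULTI_DEVICE=1)
endif()
```

Once exposed this way, the defaults can be overridden on the command line, e.g. `cmake -DENABLE_UCX=OFF ..`.
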
Chuang Zhu
75e13f4f88
chore: disable some env vars for disagg by default (#3415)
* disable some env vars for disagg by default

Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>

* doc

Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>

* remove

Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>

---------

Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-04-14 10:08:10 +08:00
yuxianq
baeec63dda
refactor: Remove _pp_forward. (#3496)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-14 09:49:44 +08:00
Yiqing Yan
65d1591fbf
Waive L0 test (#3508)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
2025-04-14 09:32:01 +08:00
Chuang Zhu
6ee021a90d
chore: exchange connection id with tagSend/tagRecv (#3320)
* exchange connection id with tagSend/tagRecv

Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>

* unwaive

Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>

* tag recv/send

Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>

---------

Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-04-14 09:30:34 +08:00
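
The commit above moves the connection-id exchange onto tag-matched messaging. As an analogy only (the actual code uses UCX's tagged API, not MPI), here is a self-contained sketch of exchanging a connection id under a fixed tag; the tag value and the id are hypothetical.

```cpp
// Analogy in MPI: tag-matched send/recv carrying a connection id.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    constexpr int kConnIdTag = 42; // hypothetical tag value
    if (rank == 0)
    {
        int connectionId = 1234; // hypothetical id advertised by the sender
        MPI_Send(&connectionId, 1, MPI_INT, /*dest=*/1, kConnIdTag, MPI_COMM_WORLD);
    }
    else if (rank == 1)
    {
        int connectionId = 0;
        MPI_Recv(&connectionId, 1, MPI_INT, /*source=*/0, kConnIdTag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("peer connection id: %d\n", connectionId);
    }
    MPI_Finalize();
    return 0;
}
```
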
HuiGao-NV
d0f83d19f1
fix: include the draft model's per-token KV memory size when calculating the max number of KV cache tokens (#3497)
* fix: include the draft model's per-token KV memory size when calculating the
max number of KV cache tokens

Signed-off-by: Hui Gao <huig@nvidia.com>

* Fix code to get model_config of draft model

Signed-off-by: Hui Gao <huig@nvidia.com>

---------

Signed-off-by: Hui Gao <huig@nvidia.com>
2025-04-13 23:02:14 +08:00
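
The fix above is arithmetic: when a draft model shares GPU memory with the target model, its per-token KV footprint must appear in the denominator when sizing the cache. A minimal sketch with hypothetical names:

```cpp
#include <cstdint>

// Hypothetical helper: bytes available for KV cache divided by the combined
// per-token cost of target + draft model. Omitting the draft term (the bug)
// oversizes the pool and can OOM once the draft model's cache is allocated.
std::uint64_t maxKvCacheTokens(std::uint64_t freeBytes, std::uint64_t targetBytesPerToken, std::uint64_t draftBytesPerToken)
{
    return freeBytes / (targetBytesPerToken + draftBytesPerToken);
}
```
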
Yan Chunwei
b37c5c0a4d
make LLM-API slurm examples executable (#3402)
Signed-off-by: chunweiy <328693+Superjomn@users.noreply.github.com>
2025-04-13 21:42:45 +08:00
Aurelien Chartier
7b38018fa0
feat: Add numNodes to ParallelConfig (#3346)
* Add numNodes to ParallelConfig

If not provided, attempt to infer the number of nodes by
counting the ranks whose local rank is 0

Update device IDs check accordingly

Signed-off-by: Aurelien Chartier <achartier@nvidia.com>

* Add ParallelConfig pickle test

Signed-off-by: Aurelien Chartier <achartier@nvidia.com>

---------

Signed-off-by: Aurelien Chartier <achartier@nvidia.com>
2025-04-13 13:55:04 +02:00
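
A minimal sketch of the fallback described in the commit above, with hypothetical names: every node contributes exactly one rank whose local rank is 0, so counting those ranks yields the node count.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: localRanks[i] is the local rank of global rank i.
std::size_t inferNumNodes(std::vector<int> const& localRanks)
{
    std::size_t numNodes = 0;
    for (int localRank : localRanks)
    {
        if (localRank == 0)
        {
            ++numNodes; // exactly one local rank 0 per node
        }
    }
    return numNodes;
}
```
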
dominicshanshan
5d3180be82
feat: Add stress test for TRT-LLM (#3250)
Signed-off-by: Wangshanshan <dominicw@nvidia.com>
2025-04-13 10:24:25 +08:00
Yan Chunwei
74850c61e9
fix: switch ZMQ from file socket to tcp socket in RemoteMpiCommSession (#3462)
* switch ZMQ from file socket to tcp

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

* fix comment

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

---------

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2025-04-13 09:15:55 +08:00
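
The motivation for the switch above: an ipc:// endpoint is backed by a filesystem socket, which only works when both peers share a filesystem namespace; tcp:// has no such constraint. A minimal libzmq sketch of binding to an ephemeral TCP port and recovering the resolved endpoint (illustrative only, not the RemoteMpiCommSession code itself):

```cpp
#include <zmq.h>
#include <cstdio>

int main()
{
    void* ctx = zmq_ctx_new();
    void* sock = zmq_socket(ctx, ZMQ_PAIR);

    // Before: zmq_bind(sock, "ipc:///tmp/session.sock");  // file socket
    // After: TCP with an OS-assigned ephemeral port.
    zmq_bind(sock, "tcp://127.0.0.1:*");

    char endpoint[256];
    size_t len = sizeof(endpoint);
    zmq_getsockopt(sock, ZMQ_LAST_ENDPOINT, endpoint, &len);
    std::printf("bound at %s\n", endpoint); // e.g. tcp://127.0.0.1:54321

    zmq_close(sock);
    zmq_ctx_term(ctx);
    return 0;
}
```
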
Robin Kobus
ceec4924d9
refactor: batch slot management in decoder classes (#3300)
* refactor: batch slot management in decoder classes

- Changed `forwardBatchSlots` from a single `TensorPtr` to a `std::vector<TensorPtr>` in `decoderBuffers.h` and updated its initialization in `decoderBuffers.cpp`.
- Updated `batchSlots` in `iGptDecoderBatched.h` to a `std::vector<TensorPtr>` for better handling of batch sizes.
- Modified `mBatchSlotsDecoder` in `statefulGptDecoderBatched.h` to use a `std::vector<TensorPtr>` and adjusted its initialization in `statefulGptDecoderBatched.cpp`.
- Ensured proper reshaping of tensors in the setup methods to accommodate the new vector structure.

These changes enhance flexibility in managing tensor buffers across different batch sizes.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Setup batch slots outside of the decoder

- Refactored batch slot management to utilize `makeBatchSlots`, enhancing clarity and functionality in batch processing.
- Introduced `DecoderState` to `MakeDecodingBatchInputOutput` for improved state handling during decoding.
- Updated the `operator()` method to include `decoderState` as a parameter, facilitating better integration with the decoding process.
- Modified related tests to accommodate changes in batch slot handling and ensure proper functionality.

These updates improve the overall structure and efficiency of the decoding process in the batch manager.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Enhance decoder input structure with maxDecodingEngineTokens

- Updated the `Input` class in `iGptDecoderBatched.h` to include a new parameter `maxDecodingEngineTokens` for better control over decoding limits.
- Modified the `MakeDecodingBatchInputOutput` algorithm to compute the maximum number of decoding tokens based on active slots.
- Adjusted the `GptDecoderBatched` class to utilize the new `maxDecodingEngineTokens` parameter, improving clarity in token management during decoding.
- Updated Python bindings to reflect changes in the `Input` class constructor.
- Enhanced tests to ensure proper handling of the new parameter.

These changes improve the flexibility and efficiency of the decoding process in the batch manager.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Streamline decoder input creation and batch slot management

- Introduced a new function `createDecoderInputs` to encapsulate the logic for creating decoder inputs, improving code organization.
- Updated the `operator()` method to utilize the new `createDecoderInputs` function, simplifying the decoding input setup process.
- Removed the `maxOfActiveSlots` template function to streamline the logic for determining the maximum number of active decoding engine tokens.
- Introduced a direct calculation of `maxActiveDecodingEngineTokens` within the `createDecoderInputs` function, enhancing clarity and reducing complexity.

These changes enhance the maintainability and readability of the decoding process in the batch manager.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Update logits handling in decoder batch

- Modified the `decoder_batch::Input` to accept a vector of vectors for logits, enhancing flexibility in tensor management.
- Adjusted the `createDecoderInputs` function to accommodate the new logits structure, ensuring proper batch processing.
- Updated Python bindings to reflect changes in the `Input` class constructor, maintaining compatibility with existing interfaces.
- Refactored the `GptDecoderBatched` and `StatefulGptDecoderBatched` classes to utilize the updated logits structure, improving clarity in tensor slicing and batch size management.
- Enhanced tests to validate the new input structure and ensure correct functionality across various decoding scenarios.

These changes streamline the decoding process and improve the overall maintainability of the codebase.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Rename maxDecodingEngineTokens to maxDecoderSteps

- Updated the `Input` class in `iGptDecoderBatched.h` to rename `maxDecodingEngineTokens` to `maxDecoderSteps` for improved clarity.
- Adjusted the `createDecoderInputs` function to reflect the new naming, ensuring consistency in the decoding process.
- Modified the `GptDecoderBatched` class to utilize `maxDecoderSteps` in its logic, enhancing readability and maintainability.
- Updated Python bindings to expose the renamed parameter, maintaining compatibility with existing interfaces.

These changes enhance the clarity of the decoding parameters and improve the overall structure of the codebase.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: remove usage of `active` vector from prepareForward

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Removed the `active` vector from `decoder_batch::Input`

- Removed the `active` vector from the `Input` class constructor in `iGptDecoderBatched.h`, streamlining the input handling for decoding.
- Updated the `createDecoderInputs` function and related tests to reflect the changes in the `Input` class, ensuring compatibility and maintaining functionality.
- Adjusted Python bindings to accommodate the new constructor signature, enhancing clarity in the interface.

These changes improve the maintainability and readability of the decoding process in the batch manager.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: remove usage of `active` vector from gptDecoderBatchedTest

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Unify the creation of decoder batch inputs in algorithm and tests

- Added a new static method `createDecoderBatchInputs` to streamline the creation of decoder batch inputs, enhancing clarity and maintainability.
- Updated the implementation to utilize active slots directly, simplifying the logic for managing batch slots and logits.
- Refactored the `operator()` method to leverage the new input creation function, ensuring compatibility with existing decoding processes.
- Enhanced tests to validate the new input handling approach, ensuring correct functionality across various scenarios.

These changes improve the overall structure and readability of the decoding process in the batch manager.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: remove usage of active vector from createDecoderBatchInputs

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Update maxDecoderSteps calculation

- Replaced integer division with `common::ceilDiv` for calculating `maxDecoderSteps` and `numDecoderSteps`, ensuring correct handling of token counts (a minimal sketch follows this entry).

These changes enhance the robustness of the decoding batch input creation process.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

---------

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-04-13 05:05:13 +08:00
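
On the last item above: a minimal sketch of why ceiling division matters here. The helper is a stand-in for common::ceilDiv and the numbers are illustrative; truncating division under-counts steps whenever the token count is not a multiple of the per-step capacity.

```cpp
#include <cassert>

// Stand-in for common::ceilDiv.
constexpr int ceilDiv(int a, int b)
{
    return (a + b - 1) / b;
}

int main()
{
    // 5 tokens at 4 tokens per decoder step:
    assert(ceilDiv(5, 4) == 2); // correct: a second step drains the remainder
    assert(5 / 4 == 1);         // truncating division would drop the last step
    return 0;
}
```
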
pcastonguay
145a126a28
chore: Unwaive DS + overlap disagg test (#3339)
* chore: Unwaive DS + overlap disagg test

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

* Fixing pre-commit

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

* Fixing pre-commit

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

---------

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
2025-04-12 13:33:38 -04:00
WeiHaocheng
c6081abb0e
feat: Make scaffolding Controller more generic #3408 (#3416)
Signed-off-by: fredw (generated by with_the_same_user script) <20514172+WeiHaocheng@users.noreply.github.com>
2025-04-12 21:35:38 +08:00
QI JUN
012fb9a1c4
remove useless max_num_tokens member in PyTorchConfig (#3493)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-12 21:09:58 +08:00
Robin Kobus
2ab71f9a80
refactor: decoder buffers (#3307)
* refactor: remove cumLogProbs and logProbs from DecoderBuffers

- Eliminated cumLogProbs and logProbs from DecoderBuffers, streamlining the buffer management.
- Updated related code in decoderBuffers.cpp and bindings.cpp to reflect these changes, ensuring that only host pointers are used for log probabilities.

These modifications enhance code clarity and maintainability by reducing redundancy in buffer management.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: streamline sequence length handling in GptDecoderBatched and StatefulGptDecoderBatched

- Updated GptDecoderBatched to directly use output.sequenceLengths for lengths assignment, removing unnecessary reshaping.
- Adjusted StatefulGptDecoderBatched to ensure sequence lengths are correctly shaped based on actual batch size and max beam width.

These changes enhance clarity and maintainability in the decoding process.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: integrate DecoderState for sequence length management in decoding process

- Updated DecoderBuffers to remove direct handling of sequence lengths, now utilizing DecoderState for this purpose.
- Adjusted MakeDecodingBatchInputOutput to accept DecoderState, enhancing clarity in the decoding input/output management.
- Refactored GptDecoderBatched and StatefulGptDecoderBatched to streamline sequence length handling, ensuring consistency across the decoding workflow.

refactor: update SlotDecoderBuffers to manage sequence lengths directly

- Introduced sequenceLengths and sequenceLengthsHost to SlotDecoderBuffers for better management of sequence lengths.
- Refactored asyncSend and recv methods to utilize the new sequenceLengths member, enhancing clarity and reducing redundancy.
- Updated TrtGptModelInflightBatching to align with the new structure, ensuring consistent handling of sequence lengths across the decoding process.

These changes improve maintainability and streamline the decoding workflow.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Delegate to asyncSend method in SlotDecoderBuffers

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

---------

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-04-12 11:41:24 +02:00
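
A compact sketch of the ownership move described above, with hypothetical types: sequence lengths get a single owner in DecoderState, which both decoder variants consult, instead of being duplicated in DecoderBuffers.

```cpp
#include <vector>

// Hypothetical shapes only; the real code uses ITensor device buffers.
struct DecoderState
{
    std::vector<int> sequenceLengths; // single owner of the lengths

    std::vector<int> const& getSequenceLengths() const
    {
        return sequenceLengths;
    }
};

struct DecoderBuffers
{
    // No sequenceLengths member anymore; consumers read via the state.
    DecoderState* state = nullptr;
};
```
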
yuxianq
29c5085400
fix: Fix PP for llama. (#3449)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-12 17:20:27 +08:00
Robin Kobus
1bd84c6d8c
feat: Allow individual gatherContext for each additional output (#3374)
* refactor: Update ExecutorConfig to use AdditionalModelOutput type

- Changed function signatures and member variables across multiple files to replace std::optional<std::vector<std::string>> with std::optional<std::vector<executor::AdditionalModelOutput>> to include a gatherContext flag for each additional output (a minimal sketch follows this entry).
- Updated related serialization and deserialization methods to accommodate the new type.
- Adjusted tests to reflect the changes in the output handling structure.

This refactor enhances the flexibility and maintainability of the output configuration in the executor and batch manager components.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Remove equality operator from TrtGptModelOptionalParams

- Deleted the operator== implementation from TrtGptModelOptionalParams to simplify the class.
- Updated the pybind11 bindings to remove the exposure of the equality operator to Python.

This change streamlines the class definition and reduces unnecessary complexity in the bindings.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Enhance copyAdditionalOutputs to utilize AdditionalModelOutput

- Updated the copyAdditionalOutputs function to accept a vector of AdditionalModelOutput, allowing for the inclusion of the gatherContext flag.
- Adjusted the logic to handle context and non-context outputs separately, improving the output handling mechanism.
- Modified related unit tests to incorporate the new gatherContext parameter, ensuring comprehensive testing of the updated functionality.

This refactor improves the flexibility and clarity of output management in the batch processing workflow.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Introduce findOutputTensor utility function for output tensor retrieval

- Added a new utility function, findOutputTensor, to encapsulate the logic for finding output tensors and checking their validity.
- Refactored copyAdditionalOutputs to utilize findOutputTensor, reducing code duplication and improving clarity.
- Enhanced error checking for additional context and generation output tensors.

This change streamlines the output tensor retrieval process, enhancing maintainability and readability in the batch processing workflow.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* refactor: Check final indices of additional output tensors and update tests

- Added checks to verify the final indices of additional output tensors for context and generation outputs.
- Updated unit tests to verify the changes.
  - Add lastTokenIds input tensor to test engines.
  - Logits output depends on gatherContextLogits parameter.
- Removed gatherContextOutputs parameter from the validate method in LlmRequest.
  - Context outputs do not depend on computeContextLogits parameter.

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* fixup! refactor: Check final indices of additional output tensors and update tests

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* fixup! refactor: Update ExecutorConfig to use AdditionalModelOutput type

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* fixup! refactor: Remove equality operator from TrtGptModelOptionalParams

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* docs: Update executor.md

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* chore: Clean up includes

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

---------

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-04-12 17:00:36 +08:00
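
The key interface change in the commit above, sketched with an assumed field layout (the type name and the gatherContext flag come from the commit; the exact definition and the output names below are assumptions): a requested output is now a name plus a per-output gatherContext flag rather than a bare string.

```cpp
#include <optional>
#include <string>
#include <vector>

// Assumed layout of executor::AdditionalModelOutput.
struct AdditionalModelOutput
{
    std::string name;
    bool gatherContext = false;
};

// Before: std::optional<std::vector<std::string>> additionalOutputs;
// After: each entry chooses whether context-phase values are gathered too.
std::optional<std::vector<AdditionalModelOutput>> additionalOutputs = std::vector<AdditionalModelOutput>{
    {"hidden_states", /*gatherContext=*/true}, // hypothetical output names
    {"topk_logits", /*gatherContext=*/false},
};
```
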
nv-guomingz
adf60a8723
fix: update the default excluded_modules value for the fp8rowwise recipe. (#3477)
Signed-off-by: nv-guomingz <37257613+nv-guomingz@users.noreply.github.com>
2025-04-12 16:00:21 +08:00
hlu1
4855431d3d
[Deepseek] Redesign multi-stream API (#3459)
Signed-off-by: Hao Lu <haolu@nvidia.com>
2025-04-11 23:00:25 -07:00
Iman Tabrizian
3041bbdab3
fix: Fix disagg MTP with overlap (#3406)
* fix: disagg overlap with MTP

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>

* Review comment

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>

---------

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>
2025-04-12 12:27:24 +08:00
HuiGao-NV
c51e90d7d7
fix: don't perform memory estimation for start_attention (#3485)
* fix: don't perform memory estimation for start_attention

* Enable tests of unittest/_torch/multi_gpu

Signed-off-by: Hui Gao <huig@nvidia.com>
2025-04-12 11:34:46 +08:00
Enwei Zhu
5e2923bb92
test: Automatically clean checkpoints and engines (#3468)
* auto clean

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

* fix tempdir

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

---------

Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-04-12 09:56:29 +08:00
Iman Tabrizian
c539750d42
fix: Allow context_and_generation request type in disagg overlap (#3489)
Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>
2025-04-11 16:15:01 -07:00
QI JUN
d167cbd5bb
refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)
* remove tensorrt_llm._torch.distributed.ParallelConfig

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix ci

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix ci

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* clean

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix embedding test

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix comments

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* polish

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix ci

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* rebase

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

---------

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Co-authored-by: hlu1 <14827759+hlu1@users.noreply.github.com>
2025-04-11 15:34:20 -07:00
Enwei Zhu
cf9ceea890
test: Add DeepSeek-V3-Lite PP=4 cases (#3454)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-04-12 00:09:12 +08:00
Fridah-nv
ec723fa993
feat: [AutoDeploy] Enhance RoPE support (#3115)
* add test to map flashinfer rope op with triton custom rope ops and pytorch rope in fused_mha

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* add rope matcher and unit tests

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* capture cos and sin from graph

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* revert fuse_mha op change

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor update to address comment and remove redundant unit test

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* move view and transpose into graph nodes and update unit test to test custom op directly

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* move view into custom op, update bfs with bound, update custom op return type to be half precision

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* custom op update to support 3D input

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* handle bnsd and bsnd format, update tests, handle 3D cos/sin input to the custom op

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* add llama4 rope test, update custom op with is_neox flag

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* add llama4 style rope to matcher and update unit test

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* separate into two transformations

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* fix when num_head != num_kv_head; add support for cached position_ids and cos_sin_cache in graph; update unit tests

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor update, cache locally and propagate meta info of qk nodes

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor: fix cos_sin_cache not float

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor: move cache into matcher

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

---------

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
2025-04-11 23:51:24 +08:00
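
For orientation on the commit above, this is the rotation the matcher recognizes, in a minimal reference form (assumptions: one contiguous float head vector of even dimension; the real custom ops operate on batched tensors with cached cos/sin). The angle is theta_i = pos * base^(-2i/d), and is_neox chooses split-half pairing (x[i], x[i + d/2]) over interleaved gptj-style pairing (x[2i], x[2i+1]).

```cpp
#include <cmath>
#include <vector>

// Reference RoPE for one head vector; illustrates the is_neox flag only.
void applyRope(std::vector<float>& x, int pos, bool isNeox, float base = 10000.0f)
{
    int const d = static_cast<int>(x.size()); // head dimension, assumed even
    for (int i = 0; i < d / 2; ++i)
    {
        float const theta = pos * std::pow(base, -2.0f * i / d);
        float const c = std::cos(theta);
        float const s = std::sin(theta);
        // neox: rotate (x[i], x[i + d/2]); gptj-style: rotate (x[2i], x[2i+1]).
        int const a = isNeox ? i : 2 * i;
        int const b = isNeox ? i + d / 2 : 2 * i + 1;
        float const xa = x[a];
        float const xb = x[b];
        x[a] = xa * c - xb * s;
        x[b] = xa * s + xb * c;
    }
}
```
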
Bo Li
11b0091863
docs: Update perf-benchmarking doc on GPU configuration for consistent benchmarking. (#3458)
Signed-off-by: Bo Li <bobboli0202@gmail.com>
2025-04-11 17:21:27 +02:00
Robin Kobus
aeecdb0ab9
fix: Eagle decoding (#3456)
* fix: eagle packAcceptedPaths

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* test: Add wavefront tests for Eagle

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

---------

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-04-11 22:06:38 +08:00
Yukun He
ff82aef99b
Fix issues related to the fused MoE path. (#3435)
* One of the tactics is not supported during dispatch.
* final_hidden_states should be unpacked when not in min_latency_mode.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-04-11 21:41:15 +08:00
liji-nv
b168adba70
feat: Add NVFP4 UB pattern optimization pass in torch compile (#3371)
* feat: Add NVFP4 UB pattern optimization pass in torch compile

* Add an additional flag for the UB fp4 pattern to avoid inverting the scale
* Add NVFP4 related UB patterns

Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>

* Update atol; some test points fail for B200 umbriel.

Signed-off-by: liji-nv <59594262+liji-nv@users.noreply.github.com>

---------

Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Signed-off-by: liji-nv <59594262+liji-nv@users.noreply.github.com>
2025-04-11 21:25:29 +08:00
Shunkangz
ea050084ad
feat: Add support for chat completion in PD (#2985)
* Add support for chat completion in PD

* Add support for include_usage in PD

* Reformat
* Remove redundant code

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>

* Refactor code

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>

* Add chat completion test

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>

* Refactor code

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>

---------

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>
Co-authored-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>
2025-04-11 17:53:28 +08:00
Yechan Kim
5bc6f093c8
fix: mllama e2e pytorch flow (#3397)
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
2025-04-11 17:33:15 +08:00
Ivy Zhang
20e54e5c89
test: add cuda visible device constraint for phi_1gpu test (#3364)
Signed-off-by: Ivy Zhang <yanzh@nvidia.com>
2025-04-11 17:14:52 +08:00
Ivy Zhang
d998832b33
test: add torch flow test case in qa test list (#3404)
Signed-off-by: Ivy Zhang <yanzh@nvidia.com>
2025-04-11 16:57:41 +08:00
pansicheng
143edc8153
fix partialMatch (#3413)
Signed-off-by: pansicheng <sicheng.pan.chn@gmail.com>
2025-04-11 16:42:52 +08:00
Yiqing Yan
0d351317c2
Waive failing post-merge tests (#3472)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
2025-04-11 16:23:07 +08:00
Yuan Tong
a139eae425
chore: Stabilize ABI boundary for internal kernel library (#3117)
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
2025-04-11 15:07:50 +08:00
Enwei Zhu
410f56357e
test: Waive torch compile tests (#3471)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-04-11 13:38:05 +08:00
QI JUN
16ca45747b
always trigger multi-GPU tests to protect modeling_llama.py and modeling_deepseekv3.py (#3434)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-11 13:19:23 +08:00
wili
5142c783c0
fix: Beam Search Diversity (#3375)
Signed-off-by: wili-65535 <wili-65535@user.noreply.github.com>
Co-authored-by: wili-65535 <wili-65535@user.noreply.github.com>
2025-04-11 11:58:59 +08:00
QI JUN
1e2a339642
waive unittest/_torch/multi_gpu (#3464)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-11 09:59:16 +08:00
QI JUN
6cef10068a
waive a test case of llama 3.1 with torch compile (#3461)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-11 09:15:19 +08:00
tburt-nv
5616c0d232
add pre-commit check to GitHub Actions (#3129)
Signed-off-by: Tyler Burt <195370667+tburt-nv@users.noreply.github.com>
2025-04-11 06:40:53 +08:00
Dom Brown
a8310b01dc
feat: trtllm-gen fp4 GEMM for pytorch workflow (#3423)
* feat: trtllm-gen fp4 GEMM

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>

* Clean up

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>

* Remove incorrect header

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>

* Reviewer comment

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>

---------

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
2025-04-11 02:28:07 +08:00
Iman Tabrizian
d7f45e50c6
test: disable attention DP tests for single GPU (#3395)
Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>
2025-04-11 01:38:17 +08:00
Zhihan Jiang
8300218d21
feat: support llama4 NoPE layers; support FP8 checkpoint loading (#3382)
* Enable NoPE, fix a rotary embedding bug for gptj_style_rope, address PR comments, properly skip the rotary embedding for Llama4 NoPE layers

* Add support for FP8 checkpoints, fix ckpt weight loading for FP8

* Temporarily disable min_latency_mode for llama4

---------

Co-authored-by: Yilin Fan <yilinf@nvidia.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
2025-04-10 10:16:42 -07:00
amitz-nv
a6a2ae6cc1
chore: Rename nvsmall to nemotron nas (#3447)
* Rename nvsmall to nemotron NAS

* Revert nvsmall to nemotron_nas rename in paths in tests that access llm_models_root/nvsmall/tests

* Add NemotronNAS to pytorch supported models table

Signed-off-by: Amit Zuker <203509407+amitz-nv@users.noreply.github.com>
2025-04-10 23:16:52 +08:00
wm2012011492
af05749e90
feat: add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa… (#3369)
* add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa_llmapi.py

Signed-off-by: mengw <12670782+wm2012011492@users.noreply.github.com>

* fix coding style

Signed-off-by: mengw <12670782+wm2012011492@users.noreply.github.com>

* add unittest

Signed-off-by: mengw <12670782+wm2012011492@users.noreply.github.com>

---------

Signed-off-by: mengw <12670782+wm2012011492@users.noreply.github.com>
Co-authored-by: mengw <12670782+wm2012011492@users.noreply.github.com>
2025-04-10 22:45:57 +08:00
QI JUN
f5281fffaa
waive some test cases of test_llm_multi_gpu.py (#3452)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-10 22:02:35 +08:00
Yan Chunwei
c5e803ba48
chore: code cleanup for error logging and SharedMemory in proxy.py (#3432)
* cleanup log

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

* remove shared-memory

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

* remove ExecutorResponse

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

* add assert for postproc

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

---------

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2025-04-10 21:57:06 +08:00