Because it duplicates test_fp4_linear. Also, the C++ profiler has already been unified with the new AutoTuner.
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
* doc: Update doc on enabling FP8 MLA for DeepSeek.
Signed-off-by: Bo Li <bobboli0202@gmail.com>
* Update.
Signed-off-by: Bo Li <bobboli0202@gmail.com>
* Update.
Signed-off-by: Bo Li <bobboli0202@gmail.com>
* Update the status on Hopper and Blackwell.
Signed-off-by: Bo Li <bobboli0202@gmail.com>
* Update.
Signed-off-by: Bo Li <bobboli0202@gmail.com>
* Update table of contents.
Signed-off-by: Bo Li <bobboli0202@gmail.com>
---------
Signed-off-by: Bo Li <bobboli0202@gmail.com>
Co-authored-by: bhsueh_NV <11360707+byshiue@users.noreply.github.com>
* add pip scripts dir to path
* move nvrtc_wrapper to conan
* support building nvrtc wrapper from source
---------
Signed-off-by: Tyler Burt <195370667+tburt-nv@users.noreply.github.com>
* chore: unify pp_layers helpers
Fix assumptions about equal number of layers per PP rank
in prepare_attention_inputs
Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
* Apply a tentative fix to the MoE bypass kernel update
* Pass None to disable the final stage in MoE
Co-authored-by: hlu1 <14827759+hlu1@users.noreply.github.com>
Signed-off-by: Chang Liu <lc9114@gmail.com>
---------
Signed-off-by: Chang Liu <lc9114@gmail.com>
Co-authored-by: hlu1 <14827759+hlu1@users.noreply.github.com>
* add dgx_h200 tests
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
* test
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
* fix pre-commit
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
* fix
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
* fix
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
* change bsl branch
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
* fix
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
* change multi gpu related file list
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
---------
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
* fix: Fixing issue with first gen token being returned twice with streaming
Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
* Fixing not_expectring_strings in test
Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
---------
Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
No change of default value (still ON).
These were hidden CMake vars before this patch.
Fix issue #3289
Signed-off-by: William Tambellini <wtambellini@sdl.com>
Co-authored-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
* fix: include the draft model's per-token KV memory size when calculating
the maximum number of KV cache tokens
Signed-off-by: Hui Gao
* Fix code to get model_config of draft model
Signed-off-by: Hui Gao
---------
Signed-off-by: Hui Gao
* Add numNodes to ParallelConfig
If not provided, attempt to determine the number of nodes by
counting the ranks whose local rank is 0
Update the device ID check accordingly
Signed-off-by: Aurelien Chartier <achartier@nvidia.com>
* Add ParallelConfig pickle test
Signed-off-by: Aurelien Chartier <achartier@nvidia.com>
---------
Signed-off-by: Aurelien Chartier <achartier@nvidia.com>
* refactor: batch slot management in decoder classes
- Changed `forwardBatchSlots` from a single `TensorPtr` to a `std::vector<TensorPtr>` in `decoderBuffers.h` and updated its initialization in `decoderBuffers.cpp`.
- Updated `batchSlots` in `iGptDecoderBatched.h` to a `std::vector<TensorPtr>` for better handling of batch sizes.
- Modified `mBatchSlotsDecoder` in `statefulGptDecoderBatched.h` to use a `std::vector<TensorPtr>` and adjusted its initialization in `statefulGptDecoderBatched.cpp`.
- Ensured proper reshaping of tensors in the setup methods to accommodate the new vector structure.
These changes enhance flexibility in managing tensor buffers across different batch sizes.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Setup batch slots outside of the decoder
- Refactored batch slot management to utilize `makeBatchSlots`, enhancing clarity and functionality in batch processing.
- Introduced `DecoderState` to `MakeDecodingBatchInputOutput` for improved state handling during decoding.
- Updated the `operator()` method to include `decoderState` as a parameter, facilitating better integration with the decoding process.
- Modified related tests to accommodate changes in batch slot handling and ensure proper functionality.
These updates improve the overall structure and efficiency of the decoding process in the batch manager.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Enhance decoder input structure with maxDecodingEngineTokens
- Updated the `Input` class in `iGptDecoderBatched.h` to include a new parameter `maxDecodingEngineTokens` for better control over decoding limits.
- Modified the `MakeDecodingBatchInputOutput` algorithm to compute the maximum number of decoding tokens based on active slots.
- Adjusted the `GptDecoderBatched` class to utilize the new `maxDecodingEngineTokens` parameter, improving clarity in token management during decoding.
- Updated Python bindings to reflect changes in the `Input` class constructor.
- Enhanced tests to ensure proper handling of the new parameter.
These changes improve the flexibility and efficiency of the decoding process in the batch manager.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Streamline decoder input creation and batch slot management
- Introduced a new function `createDecoderInputs` to encapsulate the logic for creating decoder inputs, improving code organization.
- Updated the `operator()` method to utilize the new `createDecoderInputs` function, simplifying the decoding input setup process.
- Removed the `maxOfActiveSlots` template function to streamline the logic for determining the maximum number of active decoding engine tokens.
- Introduced a direct calculation of `maxActiveDecodingEngineTokens` within the `createDecoderInputs` function, enhancing clarity and reducing complexity.
These changes enhance the maintainability and readability of the decoding process in the batch manager.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Update logits handling in decoder batch
- Modified the `decoder_batch::Input` to accept a vector of vectors for logits, enhancing flexibility in tensor management.
- Adjusted the `createDecoderInputs` function to accommodate the new logits structure, ensuring proper batch processing.
- Updated Python bindings to reflect changes in the `Input` class constructor, maintaining compatibility with existing interfaces.
- Refactored the `GptDecoderBatched` and `StatefulGptDecoderBatched` classes to utilize the updated logits structure, improving clarity in tensor slicing and batch size management.
- Enhanced tests to validate the new input structure and ensure correct functionality across various decoding scenarios.
These changes streamline the decoding process and improve the overall maintainability of the codebase.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Rename maxDecodingEngineTokens to maxDecoderSteps
- Updated the `Input` class in `iGptDecoderBatched.h` to rename `maxDecodingEngineTokens` to `maxDecoderSteps` for improved clarity.
- Adjusted the `createDecoderInputs` function to reflect the new naming, ensuring consistency in the decoding process.
- Modified the `GptDecoderBatched` class to utilize `maxDecoderSteps` in its logic, enhancing readability and maintainability.
- Updated Python bindings to expose the renamed parameter, maintaining compatibility with existing interfaces.
These changes enhance the clarity of the decoding parameters and improve the overall structure of the codebase.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: remove usage of `active` vector from prepareForward
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Removed the `active` vector from `decoder_batch::Input`
- Removed the `active` vector from the `Input` class constructor in `iGptDecoderBatched.h`, streamlining the input handling for decoding.
- Updated the `createDecoderInputs` function and related tests to reflect the changes in the `Input` class, ensuring compatibility and maintaining functionality.
- Adjusted Python bindings to accommodate the new constructor signature, enhancing clarity in the interface.
These changes improve the maintainability and readability of the decoding process in the batch manager.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: remove usage of `active` vector from gptDecoderBatchedTest
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Unify the creation of decoder batch inputs in algorithm and tests
- Added a new static method `createDecoderBatchInputs` to streamline the creation of decoder batch inputs, enhancing clarity and maintainability.
- Updated the implementation to utilize active slots directly, simplifying the logic for managing batch slots and logits.
- Refactored the `operator()` method to leverage the new input creation function, ensuring compatibility with existing decoding processes.
- Enhanced tests to validate the new input handling approach, ensuring correct functionality across various scenarios.
These changes improve the overall structure and readability of the decoding process in the batch manager.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: remove usage of active vector from createDecoderBatchInputs
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
* refactor: Update maxDecoderSteps calculation
- Replaced integer division with `common::ceilDiv` for calculating `maxDecoderSteps` and `numDecoderSteps`, ensuring correct handling of token counts.
These changes enhance the robustness of the decoding batch input creation process.
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
---------
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>