(1) match quant exclude module names to TRTLLM names
(2) No need for any special weight loading for quantization scale weights (#3891)
Signed-off-by: Tomer Asida <57313761+tomeras91@users.noreply.github.com>
* add parallel_q_b_proj_and_concat
Signed-off-by: junliu <65336694+hello-11@users.noreply.github.com>
* code cleanup
Signed-off-by: junliu <65336694+hello-11@users.noreply.github.com>
* one GEMM/concat, then split the latent_cache and pass the parts separately to context/gen (sketched below)
Signed-off-by: junliu <65336694+hello-11@users.noreply.github.com>
---------
Signed-off-by: junliu <65336694+hello-11@users.noreply.github.com>
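For context, a minimal sketch of the fused-projection idea described above, assuming a single concatenated weight matrix and one split over the GEMM output; all names (fused_proj_and_split, w_fused, q_dim, latent_dim) are illustrative, not the actual TRT-LLM code:

```python
import torch

def fused_proj_and_split(hidden, w_fused, q_dim, latent_dim):
    # One GEMM over the concatenated weight [q_dim + latent_dim, hidden_dim] ...
    fused_out = hidden @ w_fused.T  # [num_tokens, q_dim + latent_dim]
    # ... then one split instead of two separate projections.
    q, latent_cache = torch.split(fused_out, [q_dim, latent_dim], dim=-1)
    return q, latent_cache

hidden = torch.randn(8, 512)
w_fused = torch.randn(256 + 64, 512)  # q weights and latent weights stacked row-wise
q, latent_cache = fused_proj_and_split(hidden, w_fused, q_dim=256, latent_dim=64)
# q feeds attention; latent_cache would then be passed to the context and
# generation paths separately, as the commit describes.
```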
test: add test cases for 0.19 release (#3608)
* fix test name
* add quickstart test for nemotron-ultra
* add rcca multi-node test case for deepseek-v3
* add rcca info
---------
squash (#3642)
fix: nvbugs/5187237: fix deterministic mode crash (#3448)
* nvbugs/5187237 nvbugs/5112075: fix deterministic mode error
* remove waive
* Revert "remove waive"
This reverts commit 0bf5486d19906d692bfb7a6262333c296b0087ac.
* revert ar fusion
---------
update fp8 doc (#3647)
tests: change qa perf test to trtllm-bench (#3619)
fix: FP8 quantized lm_head (NvBug 5214229) (#3567)
infra: Add PR approval protection for the release branch (#3634)
fix: nvbugs/5231298: pytorch allreduce issue (#3673)
Fix: nvbugs/5222698 variable not defined (#3630)
* Fix: nvbugs/5222698 variable not defined
* Tidy code
---------
test: sync waives.txt from main branch by disabling test_perf/gpt_350m-cppmanager case (#3685)
test: restore fp8 kv cache testing for L0 (#3671)
doc: Update DeepSeek perf docs (#3693)
* Update DeepSeek perf docs
* update
* Apply suggestions from code review
---------
tests: waive test_llm_multi_node (#3664)
fix: update test_user_buffers_mm_add_prologue atol (#3711)
Fix: cherry-pick hmac encryption from main branch (#3635)
* security fix cherry-pick changes from main
* fix HMAC in remote MPI session (#3649) (see the sketch below)
---------
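A minimal sketch of the HMAC technique referenced above, assuming pickled payloads and a shared 32-byte key; the real key exchange and message framing in the remote MPI session may differ:

```python
import hashlib
import hmac
import os
import pickle

SECRET_KEY = os.urandom(32)  # in practice, a key shared by both processes

def pack(obj):
    # Serialize, then prepend an HMAC-SHA256 tag over the payload.
    payload = pickle.dumps(obj)
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return tag + payload

def unpack(message):
    # Verify the tag before unpickling; reject tampered payloads.
    tag, payload = message[:32], message[32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("HMAC verification failed; rejecting payload")
    return pickle.loads(payload)

assert unpack(pack({"cmd": "shutdown"})) == {"cmd": "shutdown"}
```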
Un-waive DS-V3-Lite tests. (#3621)
fix: FP8 kv accuracy (#3675)
* fix FP8 kv accuracy
* update doc
---------
Fix script options for engines. (#3622)
unwaive multi-node test (#3721)
chore: Split more tests out of gpt tests (#3524) (#3674)
doc: add torch examples link into torch backend documentation (#3749)
test: Get Eagle tests working (#3593) (#3722)
Waive L0 test (#3756)
waive failed case in perf test, change default max_batch_size to 512, and write config.json to the output log (#3656)
Update ds v3 parameters in stress test. (#3676)
waive gemma on L20 (#3766)
https://nvbugs/5141291: Fix convert.py script for Qwen model. (#3758)
Include Qwen2VLDecoderLayer in the smooth_qwen2_model function.
fix: PP4 fixes and cleanup (#3688)
remove benchmark test list (#3643)
skip disagg deepseek test if sm!=90 (#3720)
test: skip failed cases on B200 (#3710)
* add skip condition to tests
* fix error
---------
test: [nvbug: 5234494] skip_pre_ada for fp8 cases (#3718)
* skip_pre_ada for fp8 cases (see the sketch below)
* update
* update after rebase
---------
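For illustration, a possible shape of the skip_pre_ada marker, assuming FP8 support starts at compute capability 8.9 (Ada); get_sm_version and the exact skip condition are assumptions, not the repo's actual helper:

```python
import pytest
import torch

def get_sm_version():
    major, minor = torch.cuda.get_device_capability()
    return major * 10 + minor

# FP8 needs compute capability >= 8.9 (Ada); skip on anything older.
skip_pre_ada = pytest.mark.skipif(
    not torch.cuda.is_available() or get_sm_version() < 89,
    reason="FP8 requires compute capability >= 8.9 (Ada)",
)

@skip_pre_ada
def test_fp8_kv_cache():
    ...  # FP8 test body
```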
add known issue to deepseek doc. (#3800)
Fix ModelOpt Mixtral AWQ OOM (#3714) (#3761)
Waive L0 tests (#3826)
fix: Reduce memory usage in fused moe op associated with AutoTuning and fix moe fallback issue. (#3793)
* Reduce memory usage in fused moe op associated with AutoTuning.
* Replace pre-defined bucket size strategy with a generating function based on the tune_max_num_tokens.
* Add free_memory logic of workspace in min_latency_mode fused moe path.
* Fix fused_moe fallback issue. (#3652)
min_latency_mode is only set to False during the warmup phase, so no tactics are ever profiled for the True configuration; when it becomes True during inference, every lookup falls back to the default tactic, causing a perf regression (illustrated below).
---------
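A toy sketch of the fallback mechanism described in the last item, assuming tactics are cached per (shape, min_latency_mode) key; this is illustrative, not the actual autotuner:

```python
# Tactics cached per configuration; min_latency_mode is part of the key.
tuned_tactics = {}

def warmup(num_tokens):
    # The bug: warmup only ever profiled min_latency_mode=False.
    tuned_tactics[(num_tokens, False)] = "profiled_fast_tactic"

def pick_tactic(num_tokens, min_latency_mode):
    # A cache miss falls back to the default tactic.
    return tuned_tactics.get((num_tokens, min_latency_mode), "default_tactic")

warmup(128)
assert pick_tactic(128, False) == "profiled_fast_tactic"
assert pick_tactic(128, True) == "default_tactic"  # the perf regression
```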
[doc] Better document for Draft-Target-Model (DTM) speculative decoding (#3797)
Fix pre-commit
Fix again
Address some review comments for the MI
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
Co-authored-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com>
* Update gen tps calculation.
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
* Add back output speed for comparison.
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
* Fix issue with f-string.
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
* Fix some spacing.
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
* Replace output speed with per-request gen-phase tput (see the sketch after this commit).
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
* Add gen TPS breakdown.
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
* Update some tagging.
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
---------
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Co-authored-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
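A small sketch of the per-request gen-phase throughput metric mentioned above, assuming the first token marks the end of the context phase; the RequestRecord fields are illustrative:

```python
from dataclasses import dataclass

@dataclass
class RequestRecord:
    first_token_time: float  # end of the context phase
    end_time: float          # last token emitted
    num_output_tokens: int

def gen_phase_tps(r: RequestRecord) -> float:
    # The first token is produced by the context phase, so only the
    # remaining tokens count toward generation-phase throughput.
    gen_time = r.end_time - r.first_token_time
    return (r.num_output_tokens - 1) / gen_time if gen_time > 0 else 0.0

r = RequestRecord(first_token_time=0.5, end_time=2.5, num_output_tokens=101)
print(gen_phase_tps(r))  # 100 tokens over 2.0 s -> 50.0 tokens/s
```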
* fix bug where a CUDA stream created as a default parameter value is initialized at import time (see the sketch after this commit)
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
* add torch.cuda.Stream() for the leader node
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
* fix pre-commit issue
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
---------
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
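The underlying Python pitfall, sketched with hypothetical functions: a default argument value is evaluated once, at import time, so a torch.cuda.Stream() default initializes CUDA as soon as the module is imported; the usual fix defaults to None and creates the stream lazily:

```python
import torch

# Buggy pattern: the default value runs at import time, creating a CUDA
# stream (and initializing CUDA) as soon as the module is imported.
def run_buggy(stream=torch.cuda.Stream()):
    with torch.cuda.stream(stream):
        ...

# Fixed pattern: default to None and create the stream inside the call.
def run_fixed(stream=None):
    if stream is None:
        stream = torch.cuda.Stream()
    with torch.cuda.stream(stream):
        ...
```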
* infra: install Triton in the base image
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
* install Triton from the base image
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
* update base image
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
* Address review comments
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
* update base image
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
* waive test
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
---------
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
* Remove results.xml when no cases ran (sketched below)
Signed-off-by: qqiao <qqiao@nvidia.com>
* Change some test config to verify
Signed-off-by: qqiao <qqiao@nvidia.com>
* Update for quotes
Signed-off-by: qqiao <qqiao@nvidia.com>
* Move the results.xml removal into the catch section
Signed-off-by: qqiao <qqiao@nvidia.com>
* Add missed path
Signed-off-by: qqiao <qqiao@nvidia.com>
* Change back the test stage setting
Signed-off-by: qqiao <qqiao@nvidia.com>
---------
Signed-off-by: qqiao <qqiao@nvidia.com>
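For illustration only: the CI change itself lives in pipeline script, but the removal logic could look like this Python sketch, assuming a JUnit-style results.xml and treating an unparsable report as "no cases ran":

```python
import os
import xml.etree.ElementTree as ET

def remove_empty_results(path="results.xml"):
    # Delete the JUnit report if it exists but contains no <testcase> entries.
    if not os.path.exists(path):
        return
    try:
        root = ET.parse(path).getroot()
        has_cases = next(root.iter("testcase"), None) is not None
    except ET.ParseError:
        has_cases = False  # treat an unparsable report as "no cases ran"
    if not has_cases:
        os.remove(path)
```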