* Restore per-channel pre-quant
Signed-off-by: Barry Kang <43644113+Barry-Delaney@users.noreply.github.com>
* Update TRT test script
Signed-off-by: Barry Kang <43644113+Barry-Delaney@users.noreply.github.com>
* Fix pre-commit
Signed-off-by: Barry Kang <43644113+Barry-Delaney@users.noreply.github.com>
---------
Signed-off-by: Barry Kang <43644113+Barry-Delaney@users.noreply.github.com>
* Unwaive test for Qwen model.
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
* update.
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
---------
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
* extend pyt nano tests perf coverage
Signed-off-by: Venky <23023424+venkywonka@users.noreply.github.com>
* explicitly set maxnt for some cases
This is because the test harness defaults to no prefill chunking, which means the ISL specified is the true context length.
When left unspecified in the test harness, the `maxnt` passed down to `trtllm-bench` defaults to 2048.
This means `trtllm-bench` receives conflicting inputs when ISL > 2048 but maxnt = 2048; hence maxnt is overridden to be consistent with the ISL for such cases.
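As a rough illustration (hypothetical helper and names, not the actual harness code), the override boils down to:

```python
# Minimal sketch, assuming the harness falls back to 2048 when maxnt is unset.
DEFAULT_MAXNT = 2048


def resolve_maxnt(isl: int, maxnt: int | None = None) -> int:
    """Pick the max_num_tokens value to pass to trtllm-bench."""
    if maxnt is None:
        maxnt = DEFAULT_MAXNT
    # With prefill chunking disabled, the full ISL must fit in one step,
    # so never let maxnt drop below the ISL.
    return max(maxnt, isl)


# e.g. resolve_maxnt(isl=8192) -> 8192 instead of the conflicting default 2048
```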
Signed-off-by: Venky <23023424+venkywonka@users.noreply.github.com>
---------
Signed-off-by: Venky <23023424+venkywonka@users.noreply.github.com>
Fixed the missing mrope argument in the summary tasks for Qwen models, and re-enabled the now-fixed tests.
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Chore: waive torch compile test cases of deepseek v3 lite (#4508)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
* [AutoDeploy] HF factory improvements
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
* improve monkey-patches and add unit tests
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
---------
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
* add cases for rtx_pro_6000 and update test filter
Signed-off-by: Ruodi <200874449+ruodil@users.noreply.github.com>
* fix a typo in the llama_v3.1_405b_instruct fp4 model entry, add more cases for RTX Pro 6000, and update the waive_list
Signed-off-by: Ruodi <200874449+ruodil@users.noreply.github.com>
---------
Signed-off-by: Ruodi <200874449+ruodil@users.noreply.github.com>
Co-authored-by: Larry <197874197+LarryXFly@users.noreply.github.com>
This PR adds a customized allreduce to TensorRT-LLM. The new allreduce handles communication on PCIe-based GPUs using low-precision quantization, which can accelerate the PCIe allreduce process.
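A minimal sketch of the general idea, assuming simple per-tensor symmetric int8 quantization (illustrative only, not the actual TensorRT-LLM kernel or its API):

```python
import torch
import torch.distributed as dist


def low_precision_allreduce(x: torch.Tensor) -> torch.Tensor:
    """Illustrative quantized allreduce: send an int8 payload plus one scale
    per rank over PCIe, then dequantize and sum locally."""
    scale = (x.abs().max().clamp(min=1e-8) / 127.0).reshape(1)
    q = (x / scale).round().clamp(-127, 127).to(torch.int8)

    world_size = dist.get_world_size()
    q_list = [torch.empty_like(q) for _ in range(world_size)]
    s_list = [torch.empty_like(scale) for _ in range(world_size)]
    dist.all_gather(q_list, q)      # int8 traffic instead of fp16/fp32
    dist.all_gather(s_list, scale)  # one scale per rank

    out = torch.zeros_like(x)
    for qi, si in zip(q_list, s_list):
        out += qi.to(x.dtype) * si  # dequantize each rank's contribution
    return out
```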
Signed-off-by: Hui Kang <hkang@nvidia.com>
Co-authored-by: Hui Kang <hkang@nvidia.com>
* Add llama4 disagg accuracy tests
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
* Make it async and add GSM8K benchmark
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
---------
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
* Fix padded vocab size for Llama
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Refactor multi-GPU llama executor tests and reuse the built model engines
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Fix test list typo
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* WIP
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Further WIP
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* WIP
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Update test lists and readme
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Try parametrizing for the asymmetric case
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
* Parametrize + skip unsupported combinations
Signed-off-by: domb <3886319+DomBrown@users.noreply.github.com>
* Update test list
Signed-off-by: domb <3886319+DomBrown@users.noreply.github.com>
* Reduce duplicated environment code
Signed-off-by: domb <3886319+DomBrown@users.noreply.github.com>
---------
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
Signed-off-by: domb <3886319+DomBrown@users.noreply.github.com>
* Deduce default max_tokens for trtllm-serve (see the sketch after this change list)
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
* Improve executor_config.max_seq_len assignment in TRT workflow
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
* Enhance error message
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
* Add deduced max_tokens test
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
---------
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
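As referenced in the first item above, a hypothetical sketch of the deduction (the names and exact rule are illustrative, not the actual trtllm-serve code): when a request omits max_tokens, a default can be derived from the model's maximum sequence length and the prompt length.

```python
# Hypothetical sketch; the real trtllm-serve logic may differ in details.
def deduce_max_tokens(prompt_len: int, max_seq_len: int,
                      requested_max_tokens: int | None = None) -> int:
    if requested_max_tokens is not None:
        return requested_max_tokens
    remaining = max_seq_len - prompt_len
    if remaining <= 0:
        raise ValueError(
            f"Prompt length {prompt_len} leaves no room for generation "
            f"within max_seq_len {max_seq_len}.")
    return remaining
```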
* [Docs] - Some clean-up for the docs
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
* [Infra] - Some clean-up for the CI pipeline
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
---------
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
* add ll-nm-nano tests that map to NIM requirements
Signed-off-by: Venky <23023424+venkywonka@users.noreply.github.com>
* prune some pytorch cases (fp8)
Signed-off-by: Venky <23023424+venkywonka@users.noreply.github.com>
* removing pyt backend test changes
- When validating the PyTorch tests with the ISL/OSL/concurrency/quantization settings (as is also done for the cpp backend), we are seeing hangs that need further debugging.
- We don't want to block this PR on that, hence these tests are removed.
Signed-off-by: Venky <23023424+venkywonka@users.noreply.github.com>
---------
Signed-off-by: Venky <23023424+venkywonka@users.noreply.github.com>
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Co-authored-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
* add docstring to summarize current rope support
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* minor: replace call_method, adjust the insertion order of the cos_sin_cache calculation node
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* add unit test for triton rope and ds rope
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* update rope matcher to match DS RoPE, add custom op for reference, add unit test case
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* cache cos[pos_idx].unsqueeze and sin[pos_idx].unsqueeze
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* minor doc update
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* separate pattern matching and optimization for explicit and complex rope + minor updates
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* clean up rope impl in repo
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* replace fused_flattened_mla_with_cache's rope impl with torch_apply_rope_with_qk_interleaving, update unit test (see the interleaved-RoPE sketch at the end of this change list)
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* minor
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* separate layout inference and transpose into a new transformation
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* update rope_with_explicit_freqs and rope_with_input_interleaved to expose unsqueeze_dim and support match_rope_layout, add unit tests
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* resolve merge conflict in transform.py; still need to fix optimize_rope with CUDA graph capture
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* minor clean up after rebase
Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>
* fix pre-commit
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* support mapping to the bnsd layout and infer unsqueeze_dim from the op
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* fix an issue where cos/sin were not the same across prompts in the same batch when mapping to the flashinfer op
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* fix for unit test
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* fix custom op input/output node ordering issue for DeepSeek V3 rope
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* clean code
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* minor
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* move flattening of cos_sin_cache to the graph, update flashinfer op docstring and test
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
* debug transform unit test failure
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
---------
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
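As referenced above, a generic sketch of QK-interleaved (DeepSeek-style) RoPE application; it illustrates what the matcher targets and is not the torch_apply_rope_with_qk_interleaving custom op itself:

```python
import torch


def apply_rope_interleaved(q: torch.Tensor, k: torch.Tensor,
                           cos: torch.Tensor, sin: torch.Tensor):
    """q, k: [batch, seq, heads, head_dim] with rotary pairs stored at
    adjacent even/odd positions; cos, sin: [seq, head_dim], i.e. per-position
    frequencies already expanded with repeat_interleave(2)."""
    def rotate_interleaved(x: torch.Tensor) -> torch.Tensor:
        x_even = x[..., 0::2]
        x_odd = x[..., 1::2]
        # (x0, x1, x2, x3, ...) -> (-x1, x0, -x3, x2, ...)
        return torch.stack((-x_odd, x_even), dim=-1).flatten(-2)

    cos = cos[None, :, None, :]  # broadcast over batch and heads
    sin = sin[None, :, None, :]
    q_out = q * cos + rotate_interleaved(q) * sin
    k_out = k * cos + rotate_interleaved(k) * sin
    return q_out, k_out
```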