Commit Graph

38 Commits

Author SHA1 Message Date
Yan Chunwei
5506f60037
chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2025-05-28 18:43:04 +08:00
Lucas Liebenwein
5cdd6bb10f
[AutoDeploy] Increased Model Coverage Mass Migration Week 1 (#4468)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>
Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
Co-authored-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Co-authored-by: sugunav14 <178320438+sugunav14@users.noreply.github.com>
Co-authored-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
Co-authored-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-27 16:43:15 +08:00
Lucas Liebenwein
de409e8468
[AutoDeploy] HF factory improvements (#4371)
* [AutoDeploy] HF factory improvements

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* improve monkey-patches and add unit tests

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

---------

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-05-19 20:13:43 -07:00
Jinyang Yuan
b618e1f55b
perf: Eliminate the need for attention DP padding when possible (#3439)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
Co-authored-by: raccoonliukai <raccoonliu@tencent.com>
2025-05-17 13:30:55 +08:00
Lucas Liebenwein
7c85890ec7
[AutoDeploy] eager pattern matcher new pattern (#4370)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-05-16 12:35:44 -04:00
Lucas Liebenwein
0e872ef0b0
[AutoDeploy] fix: proper process group clean up (#4373)
[AutoDeploy] proper process group clean up

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-05-16 12:35:25 -04:00
Netanel Haber
9cd8148f28
API Breaking Change + Readability: "decoder"->"sampler" (#4121)
* *decoder*->*sampler*; new_tensors_device: dict[str, torch.Tensor] -> device: SampleStateTensors

* **Breaking Change**: this changes public interfaces. Main changes:
* PyTorchConfig [consumed via LLM(pytorch_backend_config)]: Configuration parameters mixed_decoder and enable_trtllm_decoder -> sampler.
* Command-line argument --enable_trtllm_decoder becomes --enable_trtllm_sampler in examples/pytorch/quickstart_advanced.py.

---------

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
2025-05-16 23:52:25 +08:00
Lucas Liebenwein
8e4320ede5
[AutoDeploy] configurable cache resize (#4372)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-05-16 10:07:09 -04:00
Fridah-nv
bce281d592
feat: [AutoDeploy] update rope matcher with minor variants (Deepseek) (#3638)
* add docstring to summarize current rope support

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor: replace call_method, adjust inserting order of cos_sin_cache calculation node

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* add unit test for triton rope and ds rope

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* update rope matcher to match DS RoPE, add custom op for reference, add unit test case

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* cache cos[pos_idx].unsqueeze and sin[pos_idxs].unsqueeze

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor doc update

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* separate pattern matching and optimization for explicit and complex rope + minor updates

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* clean rope impl in repo

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* replace fused_flattened_mla_with_cache's rope impl with torch_apply_rope_with_qk_interleaving, update unit test

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* separate layout infer and transpose to a new transformation

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* update rope_with_explicit_freqs and rope_with_input_interleaved to expose unsqueeze_dim and support match_rope_layout, add unit tests

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* solve merge conflict in transform.py, need to fix optimize_rope with cuda graph capture

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor clean up after rebase

Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>

* fix pre-commit

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* support map to bnsd layout and infer unsqueeze_dim from op

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* fix issue where cos/sin were not identical across prompts in the same batch when mapping to the flashinfer op

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* fix for unit test

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* fix custom op input/output node ordering issue for DeepSeek V3 rope

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* clean code

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* move flattening of cos_sin_cache to the graph, update flashinfer op docstring and test

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* debug transform unit test failure

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

---------

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
2025-05-16 09:55:32 -04:00
Suyog Gupta
b0f7522c82
[AutoDeploy]feat: Add an AutoDeploy compile backend that only calls torch.compile (#4240)
* add a torch-compile backend

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* readme changes

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* plumb torch-compile through build_and_run_ad.py

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* plumb torch-compile through build_and_run_ad.py

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* plumb torch-compile through build_and_run_ad.py

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* add torch-cudagraph backend

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* update readme

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* update readme

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* further enhanced compiler backends

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* further enhance readme

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* better specified defaults in simple_config.py

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* fix typo in simple_config.py

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* updated deepseek-v3 support

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* revert accidental deletion in AD Readme

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

---------

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Co-authored-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-05-16 08:38:15 +08:00
Lucas Liebenwein
4883121477
[AutoDeploy] fix: disable overlap scheduler until supported (#4365)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-05-15 16:19:30 -07:00
Kaiyu Xie
b4e5df0ee0
Breaking change: perf: Enable scheduling overlap by default (#4174)
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2025-05-15 14:27:36 +08:00
Fridah-nv
d008d6412f
feat:[AutoDeploy] Update MoE pattern matcher to drop expert selection logic (#3283)
* update matcher to match expert compute first, then extract other args with LCA

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* support 3D and 2D input in torch.ops.moe.trtllm_fused_moe

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* update custom ops to support 3D and 2D inputs

Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>

* update deepseek patch

Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>

---------

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
2025-05-15 13:53:09 +08:00
sugunav14
7c828d767f
feat: [AutoDeploy] DSV3 mla attn ref op (#4272)
* raw ref op + new patch untested

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>

* Added mla attn ref op and unit tests for attn + module patches

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>

* update stray changes in deepseek.py

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>

* Updated stale documentation

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>

* removed stray update in sdpa return shapes

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>

---------

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>
2025-05-15 01:58:20 +08:00
Fridah-nv
21dbd163a7
[TRTLLM-5188] fix: [AutoDeploy] unwaive AD build test (#4273)
* unwaive small build test

Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>

* unwaive multigpu/integration tests

Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>

* fix for torch.compile+flashinfer attention

Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>

---------

Signed-off-by: Ubuntu <201670829+Fridah-nv@users.noreply.github.com>
2025-05-14 10:40:12 +08:00
Fridah-nv
3dbb087292
[TRTLLM-5188] fix: [AutoDeploy] update output shape of prepare_fused_mha_metadata_fake (#4199)
* update output shape of fake kernel prepare_fused_mha_metadata_fake

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

---------

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
2025-05-12 11:11:40 -04:00
Lucas Liebenwein
48ed38a2ac
[fix] [AutoDeploy] flashinfer usage on H100 (#4162)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-05-09 06:00:57 +08:00
Suyog Gupta
ac2ab9ba36
[AutoDeploy][perf] Further optimize flashinfer backend in AutoDeploy (#4024)
* reuse batch_indices, positions across layers

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* fix flashinfer unit tests

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* simplify call to get_batch_indices_positions

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

* fix call to get_batch_indices_positions

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>

---------

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
2025-05-06 10:46:36 +08:00
Lucas Liebenwein
be916b19e0
feat: [AutoDeploy] unfusing attention for native support (#3668)
* [AutoDeploy] unfused streamlined attention + caching

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* improved unit testing

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* reviewer feedback

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* some updates to attn_mask handling

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* updated manual benchmarking and cudagraph capture

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

---------

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-05-02 09:06:49 +08:00
Suyog Gupta
f94af0fb86
[AutoDeploy] Make all ranks agree on kv-cache size (#4007)
* make all ranks agree on kv-cache size

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* lint

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* lint

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* lint

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* lint

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* minor cleanups

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* use all_gather_object wrapper

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

---------

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>
2025-05-02 04:07:28 +08:00
sugunav14
5b9897a8cd
fix: [AutoDeploy] update hf loading for e_score_correction_bias (#3847)
Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>
2025-04-26 02:03:47 +08:00
Lucas Liebenwein
06b914e0f9
feat: [AutoDeploy] generalizing cudagraph to multiple dynamic inputs (#3589)
* generalizing cudagraph to multiple dynamic inputs

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

* fix for failing test

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>

---------

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-04-23 03:38:51 +08:00
QI JUN
112f716155
chore: move all distributed related codes into _torch.distributed directory (#3511)
* move all distributed related codes into _torch.distributed directory

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix ci

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix ci

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

---------

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-15 08:39:17 +08:00
QI JUN
d167cbd5bb
refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)
* remove tensorrt_llm._torch.distributed.ParallelConfig

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix ci

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix ci

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* clean

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix embedding test

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix comments

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* polish

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* fix ci

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* rebase

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

---------

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Co-authored-by: hlu1 <14827759+hlu1@users.noreply.github.com>
2025-04-11 15:34:20 -07:00
Fridah-nv
ec723fa993
feat:[AutoDeploy] Enhance RoPE support (#3115)
* add test to map flashinfer rope op with triton custom rope ops and pytorch rope in fused_mha

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* add rope matcher and unit tests

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* capture cos and sin from graph

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* revert fuse_mha op change

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor update to address comment and remove redundant unit test

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* move view and transpose into graph nodes and update unit test to test custom op directly

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* move view into custom op, update bfs with bound, update custom op return type to be half precision

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* custom op update to support 3D input

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* handle bnsd and bsnd format, update tests, handle 3D cos/sin input to the custom op

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* add llama4 rope test, update custom op with is_neox flag

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* add llama4 style rope to matcher and update unit test

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* separate into two transformations

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* fix when num_head != num_kv_head; add support for cached position_ids and cos_sin_cache in graph; update unit tests

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor update, cache locally and propagate meta info of qk nodes

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor: fix cos_sin_cache not float

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* minor: move cache into matcher

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

---------

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
2025-04-11 23:51:24 +08:00
sugunav14
84fc07b011
feat: [TRTLLM-3510] DeepseekV3 support in AutoDeploy (#3281)
Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>
2025-04-08 21:47:57 +08:00
yuxianq
7b03350527
Add thread leak check and fix thread/memory leak issues. (#3270)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-08 19:03:18 +08:00
Yukun He
c678774c99
feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151)
* Several optimizations and fixes for the Autotuner.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Apply the new Python side Autotuner on current linear for nvFP4 data type.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Apply the new Python side Autotuner on MoE op
* Remove routers from cache key to improve inference perf
* Prevent unnecessary code profiling. Use the do_preparation keyword to select which part should be executed before evaluating any tactic.
* Remove try-catch inside moe profiling process.
* Move default tactic -1 to 0 transforms in cpp runner.
* Revise relevant tests.
* Predefined the bucketizing strategy for fused_moe

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Add specific_profile support for AutoTuner to bypass the standard cache search process for perf optimization
* Add specific_profile for moe
* Add specific profile for linear

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Fixing and revising according to reviewer's suggestions.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Use lru_cache for inference perf optimization.
* Revert gen_custom_cache_key feature

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Replace runner with runner id to achieve a serializable cache.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Code cleanup and minor fixes.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Move all tunable runners and custom ops into torch_custom_ops.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Treat min_latency_mode as an independent dynamic tensor. Modify get_valid_tactics to suit it.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

---------

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-04-08 14:28:36 +08:00
tburt-nv
7a659885e3
chore: remove usernames from comments (#3291)
Signed-off-by: Tyler Burt <195370667+tburt-nv@users.noreply.github.com>
2025-04-05 13:44:28 +08:00
Jinyang Yuan
2fdfa39ea8
fix: Fix an error related to dummy request when MTP is used (#3146)
2025-04-03 11:08:12 +08:00
Suyog Gupta
047f2b234d
perf: [AutoDeploy] Enable AutoDeploy as a backend in trtllm-bench (#3041)
* Enable AutoDeploy as a backend in trtllm-bench

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* update how caches are resized

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* fix: files permission from 100755 to 100644

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* some comments

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* lint

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* lint

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* lint

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* lint

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* Fix function name

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* refactor

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* Remove spurious change

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* Add cursor generated doc strings

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* re-enable ad test

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* some perf cleanup

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* debug ci

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* ensure that overlap scheduler is enabled

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

* Reorder the tests

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>

---------

Signed-off-by: Suyog Gupta <suyogg@nvidia.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2025-03-26 14:33:14 -07:00
Jinyang Yuan
6b583f6f83
perf: Enable CUDA graphs when attention DP is used and active requests on different GPUs are uneven (#3010)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
Co-authored-by: raccoonliukai <raccoonliu@tencent.com>
2025-03-26 21:09:25 +08:00
yuxianq
268933b5cc
Refactor imports inside tensorrt_llm._torch. (#3015)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-03-26 11:01:07 +08:00
Kaiyu Xie
2631f21089
Update (#2978)
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2025-03-23 16:39:35 +08:00
Kaiyu Xie
3aa6b11d13
Update TensorRT-LLM (#2936)
* Update TensorRT-LLM

---------

Co-authored-by: changcui <cuichang147@gmail.com>
2025-03-18 21:25:19 +08:00
Kaiyu Xie
9b931c0f63
Update TensorRT-LLM (#2873)
2025-03-11 21:13:42 +08:00
Kaiyu Xie
77d7fe1eb2
Update TensorRT-LLM (#2849)
* Update TensorRT-LLM

---------

Co-authored-by: aotman <chenhangatm@gmail.com>
2025-03-04 18:44:00 +08:00
Kaiyu Xie
ab5b19e027
Update TensorRT-LLM (#2820)
2025-02-25 21:21:49 +08:00