xiweny
9a421d0a00
[ https://nvbugs/5503138 ] [fix] Remove compile warnings ( #9733 )
...
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
2025-12-05 10:29:20 -08:00
QI JUN
267c850792
[TRTLLM-9086][doc] Clean up TODOs in documentation ( #9292 )
...
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-11-27 14:13:00 +08:00
Jin Li
0339255103
[ https://nvbugs/5545522 ][fix] Correct Cutlass with PDL support ( #9335 )
...
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
2025-11-22 09:05:13 -08:00
Iman Tabrizian
4180417b8c
[ https://nvbugs/5601682 ][fix] Fix cacheTransceiver hang ( #9311 )
...
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
2025-11-20 15:19:23 -08:00
Vincent Zhang
08f8f96cbd
[ https://nvbugs/5284463 ][fix] Fix Ada FP8 group GEMM lacking shared memory ( #9044 )
...
Signed-off-by: Vincent Zhang <vinczhang@nvidia.com>
2025-11-11 13:00:47 +08:00
dominicshanshan
def2ad5107
[ https://nvbugs/5575920 ][fix] Fix insufficient-memory error during cublas/cublasLt handle creation ( #8900 )
...
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2025-11-07 10:14:00 -08:00
Shiyu Li
519eda29bd
[ https://nvbugs/5597647 ][fix] Fix MNNVL unit test failure due to accuracy issue on Hopper ( #8891 )
...
Signed-off-by: Shiyu Li <shili@nvidia.com>
Signed-off-by: Shiyu Li <timlee0212@outlook.com>
2025-11-06 18:28:06 +01:00
brb-nv
095b7a3ad5
[ https://nvbugs/5521253 ][fix] Enable Gemma3 12B & 27B on SM100 ( #8666 )
...
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2025-11-03 14:49:36 -08:00
yunruis
07077fb070
[ https://nvbugs/5606268 ][fix] Fix program-exit segmentation fault triggered by the CublasMMWrapper destructor ( #8834 )
...
Signed-off-by: yunruis <205571022+yunruis@users.noreply.github.com>
2025-11-03 14:46:01 +08:00
dongxuy04
d81ebb5b4d
[ https://nvbugs/5444687 ][fix] Cherry-pick online EPLB CI fix from main to release 1.1 ( #8854 )
...
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>
2025-11-03 09:17:51 +08:00
Jin Li
0dac57f2bc
[ https://nvbugs/5569534 ][fix] Warm up with different sizes for more s… ( #8515 )
...
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
2025-10-29 22:29:06 -07:00
Enwei Zhu
c1bac95382
[ https://nvbugs/5422621 ][fix] fix EPLB init hang (cherry-pick #8649 ) ( #8727 )
...
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: dongxuy04 <78518666+dongxuy04@users.noreply.github.com>
2025-10-30 10:31:34 +08:00
Chuang Zhu
b326be25e7
[ https://nvbugs/5578175 ][fix] Fix block range index ( #8470 )
...
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-10-28 11:42:23 -07:00
Shiyu Li
28c9a51c06
[ https://nvbugs/5597647 ][fix] Fix MNNVL Allreduce accuracy issue on Hopper ( #8612 )
...
Signed-off-by: Shiyu Li <shili@nvidia.com>
2025-10-26 23:06:45 -07:00
Yuxian Qiu
4faa5150ab
[ https://nvbugs/5569081 ][fix] Upgrade fmha_v2. (cherry-pick from https://github.com/NVIDIA/TensorRT-LLM/pull/8364 ) ( #8499 )
...
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-10-21 12:32:13 +08:00
Iman Tabrizian
82430f84dc
[None][bug] Set NCCL_GRAPH_REGISTER to false to avoid hang ( #8409 )
...
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
2025-10-16 07:40:51 -07:00
Yukun He
fd4311e6a3
[TRTLLM-8129][feat] Allreduce tuning and benchmark script revising ( #7870 )
...
We have encountered perf regressions on A100/H100 caused by using a one-shot kernel instead of NCCL, so a solid benchmark of the allreduce op, together with analysis of the data it collects, is beneficial.
Implemented a new AllreduceOp heuristic (a hedged sketch of a LUT-based strategy pick follows this commit entry):
- Added a linear-programming-based heuristic implementation.
- Added a LUT-based heuristic implementation and the corresponding code-generation script.
Minor AllreduceOp fixes:
- Fixed an issue where the strategy could not be overridden when ONESHOT or TWOSHOT was set.
- Fixed a minor TWOSHOT kernel perf issue.
- Cleaned up the dispatching code in AllreduceOp.
This PR fixes the perf gaps reported in:
https://nvbugspro.nvidia.com/bug/5517023
For DeepSeek-R1, it shows a performance gain of about 3-4% at concurrency levels of 256 and 512.
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-10-16 14:15:25 +08:00
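The heuristic internals live in the PR itself; as a rough illustration only, here is a minimal sketch of how a LUT-based strategy pick could work, assuming message-size buckets keyed by world size. The table contents, bucket boundaries, and names (LUT, pick_strategy) are hypothetical, not the generated table from the PR.

# Hypothetical sketch of a LUT-based allreduce strategy pick (Python).
# Bucket boundaries and table entries are illustrative only; the real table
# is generated from benchmark data per GPU and world size.
from bisect import bisect_right

# world_size -> (sorted message-size thresholds in bytes, strategy per bucket;
# the last strategy covers sizes above every threshold).
LUT = {
    4: ([65_536, 4_194_304], ["ONESHOT", "TWOSHOT", "NCCL"]),
    8: ([16_384, 1_048_576], ["ONESHOT", "TWOSHOT", "NCCL"]),
}

def pick_strategy(world_size: int, msg_bytes: int, override: str | None = None) -> str:
    """Return an allreduce strategy; an explicit override always wins
    (the PR also fixes the case where ONESHOT/TWOSHOT could not override)."""
    if override is not None:
        return override
    thresholds, strategies = LUT.get(world_size, ([], ["NCCL"]))
    return strategies[bisect_right(thresholds, msg_bytes)]

print(pick_strategy(8, 8_192))       # -> ONESHOT (small message)
print(pick_strategy(8, 2_000_000))   # -> NCCL (large message falls back to NCCL)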
Zhenhuan Chen
838958c631
[ https://nvbugs/5545522 ][fix] move PREEXIT in UB kernels to fix accuracy issue ( #8318 )
...
Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
2025-10-16 09:50:43 +08:00
Patrice Castonguay
fd7a11e11d
[ https://nvbugs/5534837 ][fix] Fix KV cache split on long context ( #8247 )
...
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
Co-authored-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-10-13 11:48:49 -04:00
Chuang Zhu
ad0e91a174
[ https://nvbugs/5546202 ][fix] Fix concurrent bug for NIXL cache transceiver ( #8147 )
...
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-10-13 09:40:56 +02:00
xiweny
72144a40d2
[ https://nvbugs/5541494 ] [fix] Fix missing sm100f/103a kernels and add tests ( #8098 )
...
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
2025-10-07 08:27:55 +08:00
Chuang Zhu
f98fa0cf8b
[None][feat] Optimize KV cache transfer for TEP ( #7613 )
...
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-09-25 20:20:04 -07:00
Yuan Tong
fae83c387b
[ #6102 ][fix] support non-system python installation ( #7763 )
...
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
2025-09-26 10:16:15 +08:00
Matthias Jouanneaux
eda1467061
[TRTLLM-5966][feat] Helix: add alltoall op ( #6815 )
...
Signed-off-by: Matthias Jouanneaux <mjoux@nvidia.com>
2025-09-25 07:18:29 -07:00
Yueh-Ting (eop) Chen
c5012423f5
[None][chore] Remove developer name in comment ( #7981 )
...
Signed-off-by: eopXD <yuehtingc@nvidia.com>
2025-09-25 06:43:38 -07:00
Guoming Zhang
202bed4574
[None][chore] Rename TensorRT-LLM to TensorRT LLM in source code. ( #7851 )
...
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2025-09-25 21:02:35 +08:00
Guoming Zhang
9f0f52249e
[None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … ( #7850 )
...
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2025-09-25 21:02:35 +08:00
Jinyang Yuan
b622cde5d5
[None][perf] Fix the tactic sorting in TrtllmGenBatchedGemmRunner::getValidConfigIndices ( #7419 )
...
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
2025-09-25 10:27:57 +02:00
Void
336c2ef540
[None][feat] DeepEP LL fp8 dispatch/combine ( #7927 )
...
Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
2025-09-25 09:20:24 +08:00
sychen52
5a65af24cd
[OMNIML-2336][feat] Add NVFP4 x FP8 moe kernels ( #7821 )
...
Signed-off-by: Shiyang Chen <shiychen@nvidia.com>
2025-09-24 12:14:35 -07:00
Perkz Zheng
60101eb8a5
[None][fix] trtllm-gen cubins compiled with wrong arch. ( #7953 )
...
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
2025-09-24 04:13:36 -07:00
HuiGao-NV
29e63d3bc2
[ https://nvbugs/5532248 ][fix] Fix fused_moe OOM ( #7931 )
...
Signed-off-by: Hui Gao <huig@nvidia.com>
2025-09-24 02:22:38 -07:00
qsang-nv
929ef4c474
[None][chore] Remove cubins for CI cases ( #7902 )
...
Signed-off-by: Qidi Sang <200703406+qsang-nv@users.noreply.github.com>
2025-09-24 14:56:31 +08:00
xiweny
276d83c898
[ https://nvbugs/5532225 ] [fix] MoE use stream-dependent workspace ( #7940 )
...
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
2025-09-24 14:44:27 +08:00
Yueh-Ting (eop) Chen
cf100933cc
[TRTLLM-6341][feature] Support SWA KV cache reuse ( #6768 )
...
This merge request adds more SWA KV cache functionality to the KV
cache manager. Previously, the KV cache for sliding window attention
(SWA) held only "window size" blocks and reused them cyclically. That
design cannot use additional GPU memory, which limits the maximum
batch size and throughput, and it cannot support KV cache reuse.
This MR changes the manager to write blocks linearly. With linear
block writing, blocks that fall out of the attention window
(out-of-window, OOW) are detached as the window advances. For now,
prioritizing a correct feature first, we directly offload OOW blocks
from the primary block pool (GPU memory) to the secondary block pool
(host memory); a future improvement will delegate this block movement
to the eviction policy.
KV cache reuse for SWA is not implemented in this merge request and
will be added in a follow-up merge request.
With linear block writing, the maximum number of blocks allocated for
a sequence (`GenerationRequest`) is determined by the specified "max
sequence length": the `GenerationRequest` bookkeeping structure now
keeps blocks for "max sequence length" tokens.
Given the above, the main changes are (more context in the MR; a
hedged sketch of the detach flow follows this commit entry):
- Remove the "cyclic" concept from the KV cache manager; it previously
guarded block reuse there.
- Add a detach mechanism under `KVCacheManager::addToken`. Note that
detach is still disabled for SWA when reuse is enabled; a follow-up
merge request will improve this.
- Make "max sequence length" a non-optional parameter of
`KVCacheManager`/`BlockManager`.
- Give all window-size resource pools an identical proportion of memory.
- Fix the free-memory calculation in `resource_manager.py`.
Signed-off-by: eopXD <yuehtingc@nvidia.com>
Co-authored-by: Tomer Asida <tasida@nvidia.com>
2025-09-24 14:28:24 +08:00
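The linear-writing-plus-detach behavior above can be modeled with a small sketch. This is a minimal, hypothetical illustration assuming a fixed tokens-per-block size and stub block pools; names such as SwaBlockList, add_token, and _detach_oow are illustrative and only loosely mirror KVCacheManager::addToken.

# Hypothetical sketch of linear SWA block writing with out-of-window (OOW)
# detach; the class and pool interfaces are assumptions, not the TensorRT-LLM API.
from collections import deque

BLOCK_TOKENS = 32  # tokens per KV cache block (assumed)

class _StubPool:
    # Minimal stand-in for a primary (GPU) or secondary (host) block pool.
    def __init__(self, name):
        self.name, self.count = name, 0
    def allocate(self):
        self.count += 1
        return f"{self.name}-block-{self.count}"
    def offload(self, block):
        print(f"offload {block} -> {self.name}")

class SwaBlockList:
    # Tracks one sequence's KV blocks; blocks are written linearly and
    # detached (offloaded to host) once they fall out of the attention window.
    def __init__(self, window_blocks, primary, secondary):
        self.window_blocks = window_blocks
        self.primary, self.secondary = primary, secondary
        self.blocks = deque()               # oldest block first
        self.tokens_in_last = BLOCK_TOKENS  # forces allocation on first token

    def add_token(self):
        if self.tokens_in_last == BLOCK_TOKENS:
            # Linear writing: always allocate a fresh block instead of
            # recycling the oldest one in place (the old cyclic behavior).
            self.blocks.append(self.primary.allocate())
            self.tokens_in_last = 0
            self._detach_oow()
        self.tokens_in_last += 1

    def _detach_oow(self):
        # Correctness-first policy from the MR: offload OOW blocks to the
        # secondary pool; later this moves into the eviction policy.
        while len(self.blocks) > self.window_blocks:
            self.secondary.offload(self.blocks.popleft())

seq = SwaBlockList(window_blocks=2, primary=_StubPool("gpu"), secondary=_StubPool("host"))
for _ in range(5 * BLOCK_TOKENS):
    seq.add_token()  # 5 blocks written linearly; the 3 oldest get offloaded to host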
Jhao-Ting Chen
220dc01372
[None][feat] support JIT mha.cu for SPEC_DEC in runtime ( #6078 )
...
Signed-off-by: Jhao-Ting Chen <jhaotingc@nvidia.com>
2025-09-23 14:56:17 -07:00
Zheng Duan
e3c1a9409f
[TRTLLM-6549][fix] add kv cache time output back ( #7798 )
...
Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com>
2025-09-23 14:12:42 -04:00
Perkz Zheng
bb64e7462c
[None][fix] fix a bug with trtllm-gen kernels + attention sinks ( #7919 )
...
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
2025-09-23 00:32:04 -07:00
Pengbo Wang
a4b4ed4535
[None][fix] Fix and add test for TRTLLM MoE backend ( #7755 )
...
Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>
2025-09-23 11:26:25 +08:00
Pengbo Wang
08cc7a041f
[ https://nvbugs/5355128 ][fix] Add missing wgmma intrinsic for starcoder ( #7643 )
...
Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>
2025-09-23 10:38:58 +08:00
Enwei Zhu
8330d5363a
[TRTLLM-8209][feat] Support new structural tag API (upgrade XGrammar to 0.1.25) ( #7893 )
...
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-09-23 09:10:09 +08:00
ChristinaZ
be576a3152
[None] [feat] Enable run_post_quant_allgather for MoE TRTLLM backend ( #6794 )
...
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>
2025-09-23 08:24:21 +08:00
Bo Deng
8cf95681e6
[TRTLLM-7989][infra] Bundle UCX and NIXL libs in the TRTLLM python package ( #7766 )
...
Signed-off-by: Bo Deng <deemod@nvidia.com>
2025-09-22 16:43:35 +08:00
brb-nv
8879ec4d35
[ https://nvbugs/5501557 ][fix] Fix out-of-bounds vector access for models with multiple layer types ( #7636 )
...
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2025-09-22 14:28:38 +08:00
xiweny
822cb0115b
[TRTLLM-6286] [perf] Add NoSmem epilogue schedule and dynamic cluster shape for sm10x group gemm ( #7757 )
...
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
Signed-off-by: djns99 <40156487+djns99@users.noreply.github.com>
Co-authored-by: djns99 <40156487+djns99@users.noreply.github.com>
2025-09-21 11:38:17 +08:00
brb-nv
e10a027a03
[TRTLLM-7731][feat] KV cache transmission in disagg with CP on gen side ( #7624 )
...
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2025-09-20 06:15:26 -07:00
Mike Iovine
8030b540ac
[ https://nvbugs/5522462 ][fix] Fix FP8 scout illegal memory access ( #7845 )
...
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-09-19 10:30:37 -04:00
Matthias Jouanneaux
1be7faef37
[TRTLLM-5966][feat] Helix: add custom position ids to MLA kernels ( #6904 )
...
Signed-off-by: Matthias Jouanneaux <mjoux@nvidia.com>
Co-authored-by: brb-nv <169953907+brb-nv@users.noreply.github.com>
2025-09-19 20:55:32 +08:00
Chuang Zhu
c98b9468af
[None][fix] Get local IP by connecting to a remote host ( #7719 )
...
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-09-19 10:01:03 +08:00
xiweny
423e5f6a3c
[TRTLLM-6286] [feat] Update CUTLASS to 4.2 and enable SM103 group gemm ( #7832 )
...
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
2025-09-19 09:50:54 +08:00