Author | Commit | Commit message | Date

Nikita Korobov | 3b4f26e4d1 | [None][feat] update TRT-LLM Gen MoE for NvFp4 + bias with tileN=256 (#9734) | 2025-12-18 11:58:23 +01:00
Signed-off-by: Nikita Korobov <14355239+nekorobov@users.noreply.github.com>

Perkz Zheng | 064b67e40c | [https://nvbugs/5727952][fix] a pdl bug in trtllm-gen fmha kernels (#9913) | 2025-12-16 00:34:37 -08:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

ChristinaZ | dff77efa2a | [None][feat] Add routing support for the new model for both cutlass and trtllm moe backend (#9792) | 2025-12-15 19:59:08 -08:00
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>

Anthony Chang | ad12b795c9 | [https://nvbugs/5661741][fix] Fix accuracy issue in TRTLLM MoE introduced in #9377 (#9999) | 2025-12-15 03:31:56 -08:00
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>

Yihan Wang | 9df4dad3b6 | [None][fix] Introduce inline namespace to avoid symbol collision (#9541) | 2025-12-12 23:32:15 +08:00
Signed-off-by: Yihan Wang <yihwang@nvidia.com>

Perkz Zheng | e34302986d | [https://nvbugs/5727952][fix] PDL bugs with trtllm-gen fmha kernels (#9863) | 2025-12-10 01:47:03 -08:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

Enwei Zhu | 7cd5a67e25 | [TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP (#9592) | 2025-12-05 22:08:52 -08:00
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

Perkz Zheng | 992781dc7b | [None][feat] update trtllm-gen nvfp4 kernels with better performance (#9510) | 2025-12-03 21:35:49 +08:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

Anthony Chang | 4742c130db | [None][feat] Improve TRTLLM MoE in small hidden size throughput cases (#9377) | 2025-11-25 09:09:27 +01:00
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>

Enwei Zhu | 13fbd4366a | [TRTLLM-9370][feat] Integration of CuteDSL NVFP4 grouped GEMM (Part 2: SwiGLU Fusion and Finalize Fusion) (#9288) | 2025-11-21 14:03:38 -08:00
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

Nikita Korobov | f2ebaf288a | [None][feat] TRT-LLM Gen MoE optimize DeepSeek Fp8 activation kernel (#9175) | 2025-11-21 15:35:00 +01:00
Signed-off-by: Nikita Korobov <14355239+nekorobov@users.noreply.github.com>

ChristinaZ | fbf6c16cd2 | [None][fix] Update the default invalid value for deepseek mode of routing (#9222) | 2025-11-19 10:14:06 +08:00
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>

Enwei Zhu | 7c4777a571 | [TRTLLM-9286][feat] Integration of CuteDSL NVFP4 grouped GEMM (#8880) | 2025-11-18 17:40:12 -08:00
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

Nikita Korobov | fe569f0594 | [None][feat] bias for FP4 TRT-LLM Gen MoE (#9220) | 2025-11-18 09:59:47 -08:00
Signed-off-by: Nikita Korobov <14355239+nekorobov@users.noreply.github.com>

Anthony Chang | 86cfb3ea7e | [None][feat] Update TRTLLM MoE cubins; reduce mxfp4 weight padding requirement; tighten TMA bound (#9025) | 2025-11-17 10:04:29 +08:00
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>

sunnyqgg | 7862b15a65 | [TRTLLM-8778][feat] Add tree attention support for blackwell arch (#8975) | 2025-11-17 09:01:53 +08:00
Signed-off-by: qgai <qgai@nvidia.com>

dongxuy04 | a370643b26 | [None][fix] support topk autotuner input for expert slot per group larger than 32 (#9087) | 2025-11-14 08:37:20 +08:00
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>

Perkz Zheng | 22c1748b80 | [TRTLLM-8816][feat] add optimized trtllm-gen attention kernels on sm103 (#9081) | 2025-11-13 12:41:07 +08:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

Iman Tabrizian | cdde15b275 | [TRTLLM-8540][feat] Add support for disagg in DSv3.2 (#8735) | 2025-11-12 08:21:11 -08:00
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>

Perkz Zheng | 222bc911cd | [None][feat] add swapsMmaAb sparseMla kernels (#8913) | 2025-11-05 09:32:34 -08:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

Perkz Zheng | 497a07021d | [None][update] optimized sparse mla kernels && fix unspecified cuda launch (#8866) | 2025-11-02 22:26:59 -08:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

Fanrong Li | f0dc746738 | [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692) | 2025-10-31 14:38:31 -07:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Signed-off-by: Tracin <10434017+Tracin@users.noreply.github.com>
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Co-authored-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: Tracin <10434017+Tracin@users.noreply.github.com>

ChristinaZ | 13cfd70f57 | [None][feat] Add unit tests and revision in block_level kernel for invalid input (#8718) | 2025-10-30 16:42:18 +08:00
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>

Anthony Chang | 8a3b870e09 | [None][feat] Update TRTLLM MoE MxFP4 cubins; autotune tileN (#8156) | 2025-10-23 09:14:18 +08:00
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>

ChristinaZ | c8b9998acb | [TRTLLM-8637][feat] Optimize the routing kernel for DeepseekV3 (MoE CUTLASS backend); Add support for KimiK2 and Qwen-next (MoE TRTLLM backend) (#7761) | 2025-10-20 10:08:31 +08:00
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>

Perkz Zheng | 0722717ec0 | [None][fix] trtllm-gen regression in PR 8301 (#8426) | 2025-10-17 03:21:31 -07:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

ChristinaZ | db1c271bc6 | [None][feat] Revise the calculation related to TileN in routing of MOE TRTLLM backend (#8148) | 2025-10-16 09:15:46 +08:00
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>

Fanrong Li | 0d20a8fd61 | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) | 2025-10-14 08:23:16 -07:00
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
Co-authored-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>

Fanrong Li | 1e0fbb776d | [TRTLLM-8536][feat] Update trtllm gen fmha kernels to support block sparse attention (#8301) | 2025-10-13 05:54:48 -07:00
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>

xiweny | 5ce9719759 | [https://nvbugs/5503138] [fix] Remove compile warnings (#8167) | 2025-10-13 13:24:23 +08:00
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>

Zhenhuan Chen | 84d2f12818 | [TRTLLM-6748][feat] add PDL support for more kernels (#7977) | 2025-10-11 08:32:05 +08:00
Signed-off-by: Zhenhuan Chen <chenzhh3671@gmail.com>

xiweny | 9298f1bdcc | [None] [test] Add B300 cases to CI (#8056) | 2025-10-06 19:23:31 -07:00
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>

Nikita Korobov | 9b3d7cc3e6 | [None][feat] Update TRT-LLM Gen MoE kernels (#7970) | 2025-10-03 09:22:45 +08:00
Signed-off-by: Nikita Korobov <14355239+nekorobov@users.noreply.github.com>

xiweny | 48e779ae8c | [https://nvbugs/5541494] [fix] add back missing sm100f bmm kernels (#8051) | 2025-09-29 05:35:44 -04:00
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>

Guoming Zhang | 202bed4574 | [None][chore] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>

Jinyang Yuan | b622cde5d5 | [None][perf] Fix the tactic sorting in TrtllmGenBatchedGemmRunner::getValidConfigIndices (#7419) | 2025-09-25 10:27:57 +02:00
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>

sychen52 | 5a65af24cd | [OMNIML-2336][feat] Add NVFP4 x FP8 moe kernels (#7821) | 2025-09-24 12:14:35 -07:00
Signed-off-by: Shiyang Chen <shiychen@nvidia.com>

Perkz Zheng | 60101eb8a5 | [None][fix] trtllm-gen cubins compiled with wrong arch. (#7953) | 2025-09-24 04:13:36 -07:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

Perkz Zheng | bb64e7462c | [None][fix] fix a bug with trtllm-gen kernels + attention sinks (#7919) | 2025-09-23 00:32:04 -07:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

Pengbo Wang | a4b4ed4535 | [None][fix] Fix and add test for TRTLLM MoE backend (#7755) | 2025-09-23 11:26:25 +08:00
Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>

ChristinaZ | be576a3152 | [None] [feat] Enable run_post_quant_allgather for MoE TRTLLM backend (#6794) | 2025-09-23 08:24:21 +08:00
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>

Yuxian Qiu | d6ebcf7c4a | [TRTLLM-6994][feat] FP8 Context MLA integration (Cherry-pick https://github.com/NVIDIA/TensorRT-LLM/pull/6059 from release/1.1.0rc2) (#7610) | 2025-09-19 09:40:49 +08:00
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>

Perkz Zheng | 1b29c2e731 | [None][feat] support gpt-oss with fp8 kv cache (#7612) | 2025-09-15 02:17:37 +08:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

Perkz Zheng | da6cb541a2 | [None][feat] Optimize MLA kernels with separate reduction kernels (#7597) | 2025-09-09 16:58:44 +08:00
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

xiweny | 0fdc6c7278 | [TRTLLM-4629] [feat] trtllm-gen kernels support sm103 (#7570) | 2025-09-07 10:04:10 +08:00
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>

sychen52 | 98a1bffb7c | [OMNIML-2336][feat] Add NVFP4 x FP8 (#6809) | 2025-09-04 09:03:38 -07:00
Signed-off-by: Shiyang Chen <shiychen@nvidia.com>

Tian Zheng | e257cb3533 | [None][feat] Support NVFP4 KV Cache (#6244) | 2025-09-01 09:24:52 +08:00
Signed-off-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>

ChristinaZ | c7269ea93a | [https://nvbugs/5392414] [fix] Add customized default routing method (#6818) | 2025-08-21 16:58:41 +08:00
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>

zhhuang-nv | 7e135d2ea7 | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00
Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>

ChristinaZ | 55f4f2d80c | [None] [fix] Fix the macro name (#6983) | 2025-08-18 03:08:32 -04:00
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>