0a0f93d4a8 | 2025-10-27 10:18:19 +08:00 | Jinyang Yuan
[None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>

822cb0115b | 2025-09-21 11:38:17 +08:00 | xiweny
[TRTLLM-6286] [perf] Add NoSmem epilogue schedule and dynamic cluster shape for sm10x group gemm (#7757)
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
Signed-off-by: djns99 <40156487+djns99@users.noreply.github.com>
Co-authored-by: djns99 <40156487+djns99@users.noreply.github.com>

109f27265c | 2025-09-02 21:54:43 -04:00 | Daniel Stokes
[None][perf] Add MOE support for dynamic cluster shapes and custom epilogue schedules (#6126)
Signed-off-by: djns99 <40156487+djns99@users.noreply.github.com>

bf1b958f1a | 2025-08-25 16:52:30 -04:00 | Bo Li
[TRTLLM-7319][perf] Fuse slicing into MoE. (#6728)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Signed-off-by: Sergey Klevtsov <sklevtsov@nvidia.com>
Co-authored-by: Sergey Klevtsov <sklevtsov@nvidia.com>

f7c597ec40 | 2025-08-21 14:08:03 -07:00 | Daniel Stokes
[None][perf] Make finalize fusion part of the tactic selection logic (#6915)
Signed-off-by: djns99 <40156487+djns99@users.noreply.github.com>

2f2f5cc72c | 2025-08-08 11:13:42 +08:00 | NVJiangShao
[TRTLLM-6744][feat] Remove input_sf swizzle for module WideEPMoE (#6231)
Signed-off-by: Jiang Shao <91270701+StudyingShao@users.noreply.github.com>
8207d5fd39 | 2025-08-07 03:04:18 -04:00 | hlu1
[None] [feat] Add model gpt-oss (#6645)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>

ae28b3a664 | 2025-07-18 09:00:12 +12:00 | Daniel Stokes
feat: Add support for benchmarking individual gemms in MOE benchmark (#6080)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>

dd2491f47d | 2025-07-15 13:40:42 +12:00 | Daniel Stokes
fix: Fix MOE benchmark to rotate buffers to prevent L2 cache reuse (#4135)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>

ed77ef2ff4 | 2025-07-14 15:17:26 +09:00 | Enwei Zhu
fix: Fix MoE benchmark (#5966)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

5773cfdcf2 | 2025-06-28 09:10:35 +12:00 | Daniel Stokes
feat: Add support for per expert activation scaling factors (#5013)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>

942841417e | 2025-06-26 12:18:19 +08:00 | Daniel Stokes
opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>

3a4851b7c3 | 2025-06-09 13:25:04 +08:00 | Daniel Stokes
feat: Add Mixture of Experts FP8xMXFP4 support (#4750)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>
5339d367ce | 2025-05-30 09:03:52 +08:00 | Jinyang Yuan
[perf] Reduce the workspace size of FP4 activation scales for MoE (#4303)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>

4b6c19737b | 2025-05-06 11:35:07 +08:00 | Yuan Tong
feat: support add internal cutlass kernels as subproject (#3658)
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>

6cdfc54883 | 2025-04-14 16:05:41 -07:00 | Pamela Peng
feat: Add FP8 support for SM 120 (#3248)
* Allow FP8 on SM120
* fix sm121
* fix
* fix pre-commit
* review update
Signed-off-by: Pamela Peng <179191831+pamelap-nvidia@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>

c7548ad72c | 2025-04-02 09:05:24 +08:00 | Zongfei Jing
perf: Add optimizations for deepseek in min latency mode (#3093)
* Add optimizations for deepseek min latency
* Fix compile error
* Update internal cutlass kernel libs
* Format code
* Resolve conflicts
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
3aa6b11d13 | 2025-03-18 21:25:19 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2936)
Co-authored-by: changcui <cuichang147@gmail.com>

9b931c0f63 | 2025-03-11 21:13:42 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2873)

ab5b19e027 | 2025-02-25 21:21:49 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2820)

2ea17cdad2 | 2025-02-18 21:27:39 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2792)
Co-authored-by: jlee <jungmoolee@clika.io>

e88da961c5 | 2025-02-13 18:40:22 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2783)

16d2467ea8 | 2025-02-11 03:01:00 +00:00 | Dan Blanaru
Update TensorRT-LLM (#2755)
Co-authored-by: Denis Kayshev <topenkoff@gmail.com>
Co-authored-by: akhoroshev <arthoroshev@gmail.com>
Co-authored-by: Patrick Reiter Horn <patrick.horn@gmail.com>

385626572d | 2024-11-26 16:51:34 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2502)
Co-authored-by: 岑灿 <yunyi.hyy@alibaba-inc.com>
1730a587d8 | 2024-10-22 20:27:35 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2363)
Co-authored-by: tonylek <137782967+tonylek@users.noreply.github.com>

b8fc6633ba | 2024-08-27 18:20:59 +08:00 | 石晓伟
Update TensorRT-LLM (#2156)
Co-authored-by: Bruno Magalhaes <bruno.magalhaes@synthesia.io>

74b324f667 | 2024-08-13 22:34:33 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2110)

be9cd719f7 | 2024-08-07 16:44:43 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2094)
Co-authored-by: akhoroshev <arthoroshev@gmail.com>
Co-authored-by: Fabian Joswig <fjosw@users.noreply.github.com>
Co-authored-by: Tayef Shah <tayefshah@gmail.com>
Co-authored-by: lfz941 <linfanzai941@gmail.com>

a681853d38 | 2024-07-30 21:25:01 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2053)

bca9a33b02 | 2024-07-23 23:05:09 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#2008)
Co-authored-by: Timur Abishev <abishev.timur@gmail.com>
Co-authored-by: MahmoudAshraf97 <hassouna97.ma@gmail.com>
Co-authored-by: Saeyoon Oh <saeyoon.oh@furiosa.ai>
Co-authored-by: hattizai <hattizai@gmail.com>

9dbc5b38ba | 2024-07-04 14:37:19 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#1891)
Co-authored-by: Marks101 <markus.schnoes@gmx.de>
Co-authored-by: lkm2835 <lkm2835@gmail.com>

f430a4b447 | 2024-05-28 20:07:49 +08:00 | Kaiyu Xie
Update TensorRT-LLM (#1688)
Co-authored-by: IbrahimAmin <ibrahimamin532@gmail.com>
Co-authored-by: Fabian Joswig <fjosw@users.noreply.github.com>
Co-authored-by: Pzzzzz <hello-cd.plus@hotmail.com>
Co-authored-by: CoderHam <hemant@cohere.com>
Co-authored-by: Konstantin Lopuhin <kostia.lopuhin@gmail.com>