Daniel Stokes | 942841417e | 2025-06-26 12:18:19 +08:00
opensource: Opensource MOE MXFP8-MXFP4 implementation (#5222)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>

Enwei Zhu | 4b82b8b4c7 | 2025-06-17 15:23:24 +08:00
[TRTLLM-5330] perf: Optimize MoE supplementary kernels for large-scale EP (#5215)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

Daniel Stokes | 3a4851b7c3 | 2025-06-09 13:25:04 +08:00
feat: Add Mixture of Experts FP8xMXFP4 support (#4750)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>

Jinyang Yuan | 5339d367ce | 2025-05-30 09:03:52 +08:00
[perf] Reduce the workspace size of FP4 activation scales for MoE (#4303)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>

djns99 | 87f734b563 | 2025-05-23 14:57:49 +12:00
[https://nvbugs/5297775] fix: Correct memory guard for large MOE tests to account for TP space (#4553)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>

djns99 | a030a898d1 | 2025-05-21 10:00:36 +08:00
perf: Fuse gemm setup function for SM90/SM100 MOE plugin path (#4146)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>

Yuan Tong | 4b6c19737b | 2025-05-06 11:35:07 +08:00
feat: support add internal cutlass kernels as subproject (#3658)
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>

Pamela Peng | 6cdfc54883 | 2025-04-14 16:05:41 -07:00
feat: Add FP8 support for SM 120 (#3248)
* Allow FP8 on SM120
* fix sm121
* fix
* fix pre-commit
* review update
Signed-off-by: Pamela Peng <179191831+pamelap-nvidia@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>

Yibin Li | 32ae1564bd | 2025-04-03 13:13:54 -04:00
update FP4 quantize layout (#3045)
Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

Zongfei Jing | c7548ad72c | 2025-04-02 09:05:24 +08:00
perf: Add optimizations for deepseek in min latency mode (#3093)
* Add optimizations for deepseek min latency
* Fix compile error
* Update internal cutlass kernel libs
* Format code
* Resolve conflicts
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

Kaiyu Xie | 3aa6b11d13 | 2025-03-18 21:25:19 +08:00
Update TensorRT-LLM (#2936)
Co-authored-by: changcui <cuichang147@gmail.com>

Kaiyu Xie | 9b931c0f63 | 2025-03-11 21:13:42 +08:00
Update TensorRT-LLM (#2873)

Kaiyu Xie | ab5b19e027 | 2025-02-25 21:21:49 +08:00
Update TensorRT-LLM (#2820)

Kaiyu Xie | 2ea17cdad2 | 2025-02-18 21:27:39 +08:00
Update TensorRT-LLM (#2792)
Co-authored-by: jlee <jungmoolee@clika.io>

Kaiyu Xie | e88da961c5 | 2025-02-13 18:40:22 +08:00
Update TensorRT-LLM (#2783)

Dan Blanaru | 16d2467ea8 | 2025-02-11 03:01:00 +00:00
Update TensorRT-LLM (#2755)
Co-authored-by: Denis Kayshev <topenkoff@gmail.com>
Co-authored-by: akhoroshev <arthoroshev@gmail.com>
Co-authored-by: Patrick Reiter Horn <patrick.horn@gmail.com>