Yao Yao | 12e075eb70 | 2025-06-13 15:53:29 +08:00
[nvbug 5333996][fix] Unload XQA cubins early to avoid static lifetime (#5133)
Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>

Perkz Zheng | 35c5e4f1c5 | 2025-04-29 10:43:54 +08:00
feat: add CGA reduction fmha kernels on Blackwell (#3763)
* update cubins
* add trtllm-gen kernels for eagle3 and also kernels with cga-reduction
* address the comments
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

hlu1 | 31624b079a | 2025-04-21 10:01:33 +08:00
feat: [Deepseek] Add trtllm-gen FP4 MOE backend (#3387)
* Add TRT-LLM Gen MOE to Deepseek
  - Fix fused MoE rebase bug
  - Fix atol in test_fp4_gemm_quantize.py
  - Fix FusedMoe
  - Disable 2nd routing kernel pre-exit
  - Bump routing reduction to fp32
  - Disable PDL for fc1
  - [DEBUG] Lift token limit to 16k
  - [Bugfix] Token limit to 16k + fp32 routing + tanh
  - Make fp8 tileN 8
  - Fix FP8 MoE + remove redundant temp output for FP4
  - [FP8-only] Avoid wasting CTAs for activation kernel
  - fix: unblock FP8 weight loading with trtllm-gen
  - Remove max_token limit for trtllm-gen path
  - perf: avoid type-conversion and fill_ from aten
* Fix rebase issues
* Fix compile issue
* CI clean
Signed-off-by: Hao Lu <haolu@nvidia.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

Julien Debache | 76a6a62073 | 2025-04-02 08:55:19 +02:00
fix: segfault in cudaDriverWrapper (#3017)
* fix segmentation fault in cudaDriverWrapper
* replace cuGetErrorMessage with cuGetErrorString and add tests
Signed-off-by: jdebache <jdebache@nvidia.com>

Dan Blanaru | 16d2467ea8 | 2025-02-11 03:01:00 +00:00
Update TensorRT-LLM (#2755)
Co-authored-by: Denis Kayshev <topenkoff@gmail.com>
Co-authored-by: akhoroshev <arthoroshev@gmail.com>
Co-authored-by: Patrick Reiter Horn <patrick.horn@gmail.com>

Kaiyu Xie | c629546ce4 | 2024-11-12 15:27:49 +08:00
Update TensorRT-LLM (#2436)

Kaiyu Xie | 89ba1b1a67 | 2024-05-07 23:34:28 +08:00
Update TensorRT-LLM (#1554)

Kaiyu Xie | 66ef1df492 | 2024-04-24 14:44:22 +08:00
Update TensorRT-LLM (#1492)
Co-authored-by: Loki <lokravi@amazon.com>

Kaiyu Xie | 4bb65f216f | 2024-03-12 18:15:52 +08:00
Update TensorRT-LLM (#1274)
Co-authored-by: meghagarwal <16129366+megha95@users.noreply.github.com>
Co-authored-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>

Kaiyu Xie | 71f60f6df0 | 2023-12-01 22:27:51 +08:00
Update TensorRT-LLM (#524)

Kevin Xie | 6e9e318e91 | 2023-09-28 09:00:05 -07:00
Update code

Kaiyu Xie | 23bc5b7c49 | 2023-09-20 00:29:41 -07:00
Initial commit