TensorRT-LLM/cpp/tensorrt_llm/common
hlu1 31624b079a
feat: [Deepseek] Add trtllm-gen FP4 MOE backend (#3387)
* Add TRT-LLM Gen MOE to Deepseek

Fix fused MoE rebase bug.

Fix atol in test_fp4_gemm_quantize.py

Fix FusedMoe.

Disable pre-exit in the 2nd routing kernel

Bump routing reduction to fp32

Disable PDL for fc1

[DEBUG] Lift token limit to 16k

[Bugfix] Token limit to 16k + fp32 routing + tanh

Set fp8 tileN to 8

Fix FP8 MoE + remove redundant temp output for FP4

[FP8-only] Avoid wasting CTAs for activation kernel

fix: unblock FP8 weight loading with trtllm-gen

Remove max_token limit for trtllm-gen path

perf: avoid type conversion and fill_ from aten

Minor fix

Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix rebase issues

Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix compile issue

Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

* CI clean

Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

---------

Signed-off-by: Hao Lu <haolu@nvidia.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
2025-04-21 10:01:33 +08:00
assert.cpp Update TensorRT-LLM (#1725) 2024-06-04 20:26:32 +08:00
attentionOp.cpp feat: Add FP8 support for SM 120 (#3248) 2025-04-14 16:05:41 -07:00
attentionOp.h Cache sin/cos in model instead of global LRU cache. (#3378) 2025-04-14 11:19:09 +08:00
CMakeLists.txt Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
cublasMMWrapper.cpp Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
cublasMMWrapper.h Update TensorRT-LLM (#2582) 2024-12-16 21:50:47 -08:00
cublasVersionCheck.h Initial commit 2023-09-20 00:29:41 -07:00
cudaBf16Fallbacks.cuh Update TensorRT-LLM (20240116) (#891) 2024-01-16 20:03:11 +08:00
cudaBufferUtils.cuh Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
cudaDriverWrapper.cpp feat: [Deepseek] Add trtllm-gen FP4 MOE backend (#3387) 2025-04-21 10:01:33 +08:00
cudaDriverWrapper.h chore: Stabilize ABI boundary for internal kernel library (#3117) 2025-04-11 15:07:50 +08:00
cudaFp8Utils.cu Add Llama 4 (#3302) 2025-04-09 03:35:21 +08:00
cudaProfilerUtils.cpp Update TensorRT-LLM (#1954) 2024-07-16 15:30:25 +08:00
cudaTypeUtils.cuh Update TensorRT-LLM (#2008) 2024-07-23 23:05:09 +08:00
customAllReduceUtils.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
envUtils.cpp chore: disable some envs for disagg by default (#3415) 2025-04-14 10:08:10 +08:00
envUtils.h chore: disable some envs for disagg by default (#3415) 2025-04-14 10:08:10 +08:00
jsonSerializeOptional.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
logger.cpp Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
mathUtils.h Update TensorRT-LLM (#2094) 2024-08-07 16:44:43 +08:00
memoryUtils.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
memoryUtils.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
nvtxUtils.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
opUtils.cpp Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) 2025-04-08 23:51:27 +08:00
opUtils.h Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
quantTypeUtils.cuh Update TensorRT-LLM (#2008) 2024-07-23 23:05:09 +08:00
reduceKernelUtils.cuh Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
safetensors.cpp Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
safetensors.h Update TensorRT-LLM (#2110) 2024-08-13 22:34:33 +08:00
stlUtils.h Update TensorRT-LLM (#1763) 2024-06-11 16:59:02 +08:00
stringUtils.cpp chore: Stabilize ABI boundary for internal kernel library (#3117) 2025-04-11 15:07:50 +08:00
timestampUtils.cpp Update TensorRT-LLM (#1954) 2024-07-16 15:30:25 +08:00
timestampUtils.h Update TensorRT-LLM (#1954) 2024-07-16 15:30:25 +08:00
tllmException.cpp chore: Stabilize ABI boundary for internal kernel library (#3117) 2025-04-11 15:07:50 +08:00
workspace.h Update TensorRT-LLM (#2184) 2024-09-03 12:14:23 +02:00