TensorRT-LLM/tensorrt_llm/_torch
hlu1 31624b079a
feat: [DeepSeek] Add trtllm-gen FP4 MoE backend (#3387)
* Add TRT-LLM Gen MoE to DeepSeek

Fix fused MoE rebase bug.

Fix atol in test_fp4_gemm_quantize.py

Fix FusedMoE.

Disable pre-exit for the 2nd routing kernel

Bump routing reduction to FP32

Disable PDL for fc1

[DEBUG] Lift token limit to 16k

[Bugfix] Token limit to 16k + FP32 routing + tanh

Set FP8 tileN to 8

Fix FP8 MoE + Remove redundant temp output for FP4

[FP8-only] Avoid wasting CTAs for activation kernel

fix: unblock FP8 weight loading with trtllm-gen

Remove max_token limit for trtllm-gen path

perf: avoid type conversion and fill_ from ATen

Minor fix

Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix rebase issues

Signed-off-by: Hao Lu <haolu@nvidia.com>

* Fix compile issue

Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

* CI clean

Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

---------

Signed-off-by: Hao Lu <haolu@nvidia.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
2025-04-21 10:01:33 +08:00
attention_backend feat: Introduce feature properties for attention backend. (#3659) 2025-04-19 12:37:27 +08:00
auto_deploy chore: move all distributed related codes into _torch.distributed directory (#3511) 2025-04-15 08:39:17 +08:00
compilation feat: Add NVFP4 UB pattern optimization pass in torch compile (#3371) 2025-04-11 21:25:29 +08:00
custom_ops Fix fused_moe fallback issue. (#3652) 2025-04-17 23:17:04 +08:00
distributed chore: move all distributed related codes into _torch.distributed directory (#3511) 2025-04-15 08:39:17 +08:00
models feat: [DeepSeek] Add trtllm-gen FP4 MoE backend (#3387) 2025-04-21 10:01:33 +08:00
modules feat: [DeepSeek] Add trtllm-gen FP4 MoE backend (#3387) 2025-04-21 10:01:33 +08:00
peft Added loraOp to the LoRA layer + test for MLP and comparison to the LoRA plugin (#3455) 2025-04-17 12:48:27 +08:00
pyexecutor feat: [DeepSeek] Add trtllm-gen FP4 MoE backend (#3387) 2025-04-21 10:01:33 +08:00
speculative Support CUDA graphs for EAGLE3 (#3176) 2025-04-17 04:53:50 +08:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
autotuner.py feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) 2025-04-08 14:28:36 +08:00
llm.py test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069) 2025-03-26 18:14:35 +08:00
metadata.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
model_config.py feat: [DeepSeek] Add trtllm-gen FP4 MoE backend (#3387) 2025-04-21 10:01:33 +08:00
pipeline_interface.py Clean up modeling_deepseek.py (#3640) 2025-04-18 17:54:33 -07:00
utils.py Cache sin cos in model instead of global LRU cache. (#3378) 2025-04-14 11:19:09 +08:00