TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels
Latest commit: bbea2647b1 by Anthony Chang, 2025-05-23 18:31:08 +08:00
Qwen3 supports TRTLLM FP4 MoE backend (#4530)
* MoE TRTLLM backend for Qwen3
* Add extra moe_backend to tests
* Address review comments
* Conditionally compile kernels on newer archs
* Fix missing positional arg
* Update the routing kernels
* Revise usage of TLLM_LOG_ERROR
* Add unit test for Qwen3 MoE (trtllm_gen backend)
* Improve weight processing speed with moe_backend=TRTLLM (roughly 2x)
* Tidy and minor fixes
* Temporarily disable an accuracy test with a known issue

---------

Signed-off-by: Anthony Chang <anchengc@nvidia.com>
Signed-off-by: Christina Zhang <christinaz@nvidia.com>
Co-authored-by: Christina Zhang <christinaz@nvidia.com>
Name | Last commit message | Last commit date
batchedGemm | fix: TRT-LLM Gen dtype declaration (#4503) | 2025-05-21 23:56:37 +02:00
blockscaleGemm | feat: trtllm-gen fp4 GEMM for pytorch workflow (#3423) | 2025-04-11 02:28:07 +08:00
blockScaleMoe | Qwen3 supports TRTLLM FP4 MoE backend (#4530) | 2025-05-23 18:31:08 +08:00
common | Cherry-pick trtllm-gen from feat/llama4 to main (#4086) | 2025-05-08 14:13:01 -07:00
fmha | Feat: add chunked-attention kernels on Blackwell (#4394) | 2025-05-21 10:16:46 +08:00
gemm | fix: TRT-LLM Gen dtype declaration (#4503) | 2025-05-21 23:56:37 +02:00
gemmGatedAct | fix: TRT-LLM Gen dtype declaration (#4503) | 2025-05-21 23:56:37 +02:00
CMakeLists.txt | Cherry-pick trtllm-gen from feat/llama4 to main (#4086) | 2025-05-08 14:13:01 -07:00