TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe
Anthony Chang bbea2647b1
Qwen3 supports TRTLLM FP4 MoE backend (#4530)
* MoE TRTLLM backend for Qwen3

Signed-off-by: Anthony Chang <anchengc@nvidia.com>

* add extra moe_backend to test

Signed-off-by: Anthony Chang <anchengc@nvidia.com>

* address comments

Signed-off-by: Anthony Chang <anchengc@nvidia.com>

* conditionally compile kernels on newer archs (see the arch-guard sketch after this log)

Signed-off-by: Anthony Chang <anchengc@nvidia.com>

* add missing positional arg

Signed-off-by: Anthony Chang <anchengc@nvidia.com>

* Update the routing kernels

Signed-off-by: Christina Zhang <christinaz@nvidia.com>

* Revise usage of TLLM_LOG_ERROR (see the logging sketch after this log)

Signed-off-by: Christina Zhang <christinaz@nvidia.com>

* Add unit test for Qwen3 MoE (trtllm_gen backend)

Signed-off-by: Christina Zhang <christinaz@nvidia.com>

* improve weight processing speed of moe_backend=TRTLLM; roughly 2x

Signed-off-by: Anthony Chang <anchengc@nvidia.com>

* tidy and minor fix

Signed-off-by: Anthony Chang <anchengc@nvidia.com>

* temporarily disable accuracy test that has a known issue

Signed-off-by: Anthony Chang <anchengc@nvidia.com>

---------

Signed-off-by: Anthony Chang <anchengc@nvidia.com>
Signed-off-by: Christina Zhang <christinaz@nvidia.com>
Co-authored-by: Christina Zhang <christinaz@nvidia.com>
2025-05-23 18:31:08 +08:00
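
A minimal sketch of the arch-conditional compilation mentioned in "conditionally compile kernels on newer archs", assuming an SM100 (Blackwell) cutoff; the kernel name, guard value, and body are illustrative, not the actual kernels in this directory:

    // Hypothetical kernel gated on newer architectures: the body is compiled
    // only when targeting SM100 and above (__CUDA_ARCH__ >= 1000); older
    // targets get an empty stub so multi-arch builds of the same translation
    // unit still compile and link.
    __global__ void blockScaleMoeKernelSketch(float* out, float const* in, int n)
    {
    #if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ >= 1000)
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < n)
        {
            out[idx] = in[idx]; // placeholder for the real block-scale MoE work
        }
    #else
        // Compiled out on older archs; the host-side runner should refuse to
        // launch this kernel when the device is below the supported arch.
    #endif
    }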
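Likewise, a minimal sketch of the printf-style TLLM_LOG_ERROR usage referenced by "Revise usage of TLLM_LOG_ERROR"; the launcher name and the specific routing constraint are hypothetical, not the check this commit actually revised:

    #include "tensorrt_llm/common/logger.h"

    // Hypothetical host-side validation before launching a routing kernel,
    // showing the macro's printf-style formatting.
    void launchRoutingSketch(int numExperts, int topK)
    {
        if (topK > numExperts)
        {
            TLLM_LOG_ERROR("Invalid MoE routing config: topK (%d) > numExperts (%d).", topK, numExperts);
            return;
        }
        // ... configure grid/block and launch the routing kernel ...
    }
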
CMakeLists.txt feat: [Deepseek] Add trtllm-gen FP4 MoE backend (#3387) 2025-04-21 10:01:33 +08:00
DevKernel.cu feat: TRT-LLM Gen integration for BMM and MoE refactoring (#4280) 2025-05-16 13:31:53 +02:00
DevKernel.h Qwen3 supports TRTLLM FP4 MoE backend (#4530) 2025-05-23 18:31:08 +08:00
RoutingKernel.cu Qwen3 supports TRTLLM FP4 MoE backend (#4530) 2025-05-23 18:31:08 +08:00
RoutingKernel.h Qwen3 supports TRTLLM FP4 MoE backend (#4530) 2025-05-23 18:31:08 +08:00
runner.cu Qwen3 supports TRTLLM FP4 MoE backend (#4530) 2025-05-23 18:31:08 +08:00
runner.h Qwen3 supports TRTLLM FP4 MoE backend (#4530) 2025-05-23 18:31:08 +08:00