TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe
Latest commit: 44fb3c1673 by Dom Brown
[TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207)
- Adds a new Python custom op (fp8_block_scale_moe_runner) and an FP8BlockScaleMoERunner class for autotuning.
- Updates C++ MoE and batched GEMM kernels to accept a configIndex for workspace sizing and execution.
- Extends the unit test to run both autotuned and non-autotuned code paths.

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
2025-06-17 21:01:56 +08:00
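The commit notes above only sketch the autotuning flow at a high level. The snippet below is a minimal illustrative sketch of that general pattern: time each candidate kernel configuration once per problem shape, cache the winning configIndex, and reuse it for later workspace sizing and execution. All names here (pick_config_index, run_with_config, fake_kernel) are hypothetical stand-ins for illustration; they are not the actual fp8_block_scale_moe_runner custom op or FP8BlockScaleMoERunner API.

```python
# Illustrative sketch only; names and signatures are hypothetical, not the TRT-LLM API.
import time
from typing import Callable, Dict, Tuple

# Cache of the best configIndex per problem shape, filled lazily on first use.
_tuning_cache: Dict[Tuple, int] = {}

def pick_config_index(problem_key: Tuple,
                      num_configs: int,
                      run_with_config: Callable[[int], None]) -> int:
    """Time every candidate configIndex for this problem shape once and cache the winner."""
    if problem_key in _tuning_cache:
        return _tuning_cache[problem_key]
    best_idx, best_time = 0, float("inf")
    for idx in range(num_configs):
        run_with_config(idx)              # warm-up so timing excludes one-time setup costs
        start = time.perf_counter()
        run_with_config(idx)              # timed run of this candidate tactic
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_idx, best_time = idx, elapsed
    _tuning_cache[problem_key] = best_idx
    return best_idx

if __name__ == "__main__":
    # Dummy "kernel" that is faster for even config indices, to exercise the tuner.
    def fake_kernel(idx: int) -> None:
        time.sleep(0.001 if idx % 2 == 0 else 0.002)

    best = pick_config_index(problem_key=("num_tokens=128", "hidden=7168"),
                             num_configs=4,
                             run_with_config=fake_kernel)
    print("selected configIndex:", best)
```

In the real integration the selected index would be handed to the C++ runner so that workspace sizing and execution use the same configuration, and the non-autotuned path simply falls back to a default index.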
| File | Last commit | Date |
| --- | --- | --- |
| CMakeLists.txt | feat: [Deepseek] Add trtllm-gen MOE FP4 MOE backend (#3387) | 2025-04-21 10:01:33 +08:00 |
| DevKernel.cu | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00 |
| DevKernel.h | [feat] trtllmGen MoE routing: added support for top groups and top K bounds (#4063) | 2025-06-13 06:00:02 +08:00 |
| IntFastDiv.h | [fix] Fix comment to pass guardwords check (#5191) | 2025-06-13 15:49:59 +08:00 |
| RoutingKernel.cu | [feat] trtllmGen MoE routing: added support for top groups and top K bounds (#4063) | 2025-06-13 06:00:02 +08:00 |
| RoutingKernel.h | [feat] trtllmGen MoE routing: added support for top groups and top K bounds (#4063) | 2025-06-13 06:00:02 +08:00 |
| runner.cu | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| runner.h | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |