TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe
Latest commit a608b00d38 by ChristinaZ:
Fix mPtrExpertCounts allocation in MoE TRT-LLM backend (nvfp4) (#5519)
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>
Date: 2025-06-27 20:17:40 +08:00
File               Last commit                                                                                                    Date
CMakeLists.txt     feat: [Deepseek] Add trtllm-gen MOE FP4 MOE backend (#3387)                                                   2025-04-21 10:01:33 +08:00
DevKernel.cu       feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643)                                                          2025-06-03 14:07:54 -07:00
DevKernel.h        [feat] trtllmGen MoE routing: added support for top groups and top K bounds (#4063)                           2025-06-13 06:00:02 +08:00
IntFastDiv.h       [fix] Fix comment to pass guardwords check (#5191)                                                            2025-06-13 15:49:59 +08:00
RoutingKernel.cu   Fix mPtrExpertCounts allocation in MoE TRT-LLM backend (nvfp4) (#5519)                                        2025-06-27 20:17:40 +08:00
RoutingKernel.h    Fix mPtrExpertCounts allocation in MoE TRT-LLM backend (nvfp4) (#5519)                                        2025-06-27 20:17:40 +08:00
runner.cu          fix: MoE autotune fallback failed to query default heuristic (#5520)                                          2025-06-26 17:28:48 +01:00
runner.h           [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207)  2025-06-17 21:01:56 +08:00
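Of the files listed, IntFastDiv.h is the one whose role the commit log says least about. The name suggests a helper for fast integer division by an invariant divisor, a standard trick in GPU indexing code: hardware integer division is slow, so a divisor that is fixed before the hot loop (for example an expert count or tile width) is turned into a precomputed multiply plus a shift. Below is a minimal, hypothetical host-side C++ sketch of that general technique, in the style of Granlund and Montgomery's "Division by Invariant Integers using Multiplication"; the struct name FastDivSketch and its interface are illustrative only and are not the actual API of IntFastDiv.h.

    #include <cassert>
    #include <cstdint>
    #include <cstdio>
    #include <initializer_list>

    // Precompute a "magic" multiplier and shift for a fixed divisor so that
    // every later divide is one multiply-high plus one shift, with no idiv.
    struct FastDivSketch
    {
        uint32_t divisor;
        uint32_t shift; // smallest s with 2^s >= divisor
        uint32_t magic; // floor(2^32 * (2^shift - divisor) / divisor) + 1

        explicit FastDivSketch(uint32_t d)
            : divisor(d)
        {
            assert(d >= 1);
            for (shift = 0; shift < 32; ++shift)
            {
                if ((1u << shift) >= d)
                {
                    break;
                }
            }
            uint64_t one = 1;
            magic = static_cast<uint32_t>(((one << 32) * ((one << shift) - d)) / d + 1);
        }

        // Returns floor(n / divisor), computed as (mulhi(n, magic) + n) >> shift.
        uint32_t div(uint32_t n) const
        {
            uint64_t hi = (static_cast<uint64_t>(n) * magic) >> 32; // multiply-high
            return static_cast<uint32_t>((hi + n) >> shift);
        }
    };

    int main()
    {
        // Spot-check the sketch against plain hardware division.
        for (uint32_t d : {1u, 3u, 7u, 96u, 1000u})
        {
            FastDivSketch fd(d);
            for (uint32_t n : {0u, 1u, 95u, 12345u, 1u << 30})
            {
                assert(fd.div(n) == n / d);
            }
        }
        std::puts("fast-div sketch agrees with hardware division");
        return 0;
    }

On the device side, a helper like this would typically replace the 64-bit multiply-high with the CUDA __umulhi intrinsic; that is an assumption about what the header does, not something visible in this listing.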