TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Zongfei Jing 49d887f521 Fix dense GEMM integration and add scale factor validation
- Fix c_sf shape calculation: use pad_up(m, 128) // 128 for non-128-aligned m (see the sketch below this commit)
- Change c_sf dtype to uint8 to match fp4_utils.py SF_DTYPE
- Add scale factor shape and value validation in unit test
- Fix test to handle padded scale factors correctly

Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
2026-01-12 22:11:17 -08:00
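
A minimal sketch, not the repository code, of the shape fix described in the commit above: pad_up mirrors the rounding helper referenced from fp4_utils.py, and SF_DTYPE is uint8 per the commit. The n-dimension grouping, the sf_vec_size default, and the c_sf_numel name are illustrative assumptions, not the actual cute_dsl_custom_ops.py implementation.

```python
import torch

SF_DTYPE = torch.uint8  # matches fp4_utils.py SF_DTYPE, per the commit message


def pad_up(x: int, y: int) -> int:
    """Round x up to the next multiple of y."""
    return ((x + y - 1) // y) * y


def c_sf_numel(m: int, n: int, sf_vec_size: int = 16) -> int:
    # The fix: pad m up to a multiple of 128 *before* dividing, so a
    # non-128-aligned m still allocates a full 128-row scale-factor group.
    m_groups = pad_up(m, 128) // 128          # plain m // 128 drops the remainder
    # Assumed n-dimension layout: one scale per sf_vec_size elements,
    # grouped into columns of 4 (hypothetical, for illustration only).
    n_groups = pad_up(n // sf_vec_size, 4) // 4
    return m_groups * 128 * n_groups * 4


# e.g. m=100: the unpadded m // 128 yields 0 row groups; the fix yields 1,
# so the scale-factor buffer is no longer under-allocated.
c_sf = torch.empty(c_sf_numel(100, 256), dtype=SF_DTYPE)
```

The same padding is why the unit test must slice off the padded scale-factor rows before comparing values, which is what the last bullet in the commit refers to.
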
__init__.py [None][feat] Support Qwen3 next (#7892) 2025-09-29 21:16:07 +08:00
cpp_custom_ops.py [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) 2026-01-05 15:44:37 +08:00
cute_dsl_custom_ops.py Fix dense GEMM integration and add scale factor validation 2026-01-12 22:11:17 -08:00
flashinfer_custom_ops.py [TRTC-1943][feat] Env vars override support in LLM API (#9104) 2025-12-01 10:04:49 -08:00
torch_custom_ops.py [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) 2026-01-05 15:44:37 +08:00
trtllm_gen_custom_ops.py [None][perf] TRTLLM MoE maps to lower tuning buckets when ep>1 (#9998) 2026-01-05 17:16:12 +01:00
userbuffers_custom_ops.py feat: Introduce UB allocator for pytorch flow (#3257) 2025-04-08 18:39:49 +08:00