TensorRT-LLM / tensorrt_llm / quantization

Latest commit 1389f5a4d3 by Aurelien Chartier (2025-06-14 06:37:48 -07:00)
feat: Add support for fp8 rowwise quantization (#4876)
Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
Co-authored-by: aikitoria <151776613+aikitoria@users.noreply.github.com>
Name                      Last commit message                                      Last commit date
utils                     Qwen3 supports TRTLLM FP4 MoE backend (#4530)            2025-05-23 18:31:08 +08:00
__init__.py               Update TensorRT-LLM (#2792)                              2025-02-18 21:27:39 +08:00
functional.py             feat: Add support for fp8 rowwise quantization (#4876)   2025-06-14 06:37:48 -07:00
image_processing.py       Update TensorRT-LLM (#2582)                              2024-12-16 21:50:47 -08:00
layers.py                 feat: Add support for fp8 rowwise quantization (#4876)   2025-06-14 06:37:48 -07:00
mode.py                   Mxfp8xmxfp4 quant mode (#4978)                           2025-06-10 22:01:37 +08:00
quantize_by_modelopt.py   feat: Add support for fp8 rowwise quantization (#4876)   2025-06-14 06:37:48 -07:00
quantize.py               feat: Add support for fp8 rowwise quantization (#4876)   2025-06-14 06:37:48 -07:00
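The latest commit threads fp8 rowwise quantization through functional.py, layers.py, quantize.py, and quantize_by_modelopt.py, with the corresponding quantization flags defined in mode.py. Below is a minimal sketch of how those enums might be exercised; the member and method names used here (QuantAlgo.FP8_PER_CHANNEL_PER_TOKEN, QuantMode.from_quant_algo, QuantMode.has_fp8_rowwise) are assumptions about this revision of TensorRT-LLM, not taken from the listing above, and may differ across versions.

```python
# Hedged sketch: mapping a quantization algorithm to its QuantMode flag set
# using the enums defined in tensorrt_llm/quantization/mode.py.
# Assumption: QuantAlgo.FP8_PER_CHANNEL_PER_TOKEN is the fp8 rowwise algorithm
# and QuantMode exposes from_quant_algo()/has_fp8_rowwise() in this revision.
from tensorrt_llm.quantization.mode import QuantAlgo, QuantMode

# fp8 rowwise: per-channel weight scales, per-token activation scales.
mode = QuantMode.from_quant_algo(QuantAlgo.FP8_PER_CHANNEL_PER_TOKEN)

# Query the resulting flag set.
print(mode.has_fp8_rowwise())  # expected True for the fp8 rowwise path
print(mode.has_fp8_qdq())      # expected False: rowwise uses a separate flag
```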