TensorRT-LLM/tensorrt_llm/quantization
Latest commit: ef3fdc8051 by Tracin (Signed-off-by: Tracin <10434017+Tracin@users.noreply.github.com>), 2025-06-16 11:30:57 +08:00
feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867)
| Name | Last commit | Date |
|------|-------------|------|
| utils | feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867) | 2025-06-16 11:30:57 +08:00 |
| __init__.py | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| functional.py | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| image_processing.py | Update TensorRT-LLM (#2582) | 2024-12-16 21:50:47 -08:00 |
| layers.py | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| mode.py | Mxfp8xmxfp4 quant mode (#4978) | 2025-06-10 22:01:37 +08:00 |
| quantize_by_modelopt.py | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| quantize.py | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |