TensorRT-LLM/tensorrt_llm/quantization
Latest commit: 1bab9000a6
perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
2025-06-26 14:03:56 +08:00
utils/                    perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318)  2025-06-26 14:03:56 +08:00
__init__.py               Update TensorRT-LLM (#2792)                                    2025-02-18 21:27:39 +08:00
functional.py             feat: Add support for fp8 rowwise quantization (#4876)         2025-06-14 06:37:48 -07:00
image_processing.py       Update TensorRT-LLM (#2582)                                    2024-12-16 21:50:47 -08:00
layers.py                 feat: Add support for fp8 rowwise quantization (#4876)         2025-06-14 06:37:48 -07:00
mode.py                   Mxfp8xmxfp4 quant mode (#4978)                                 2025-06-10 22:01:37 +08:00
quantize_by_modelopt.py   feat: Add support for fp8 rowwise quantization (#4876)         2025-06-14 06:37:48 -07:00
quantize.py               feat: Add support for fp8 rowwise quantization (#4876)         2025-06-14 06:37:48 -07:00
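For context on the helpers named in the latest commit: block-scaled quantization formats keep one scale factor (SF) per small group of elements, and the GEMM kernels consume those scale factors in a tiled ("swizzled") layout rather than plain row-major order, so the SF tensor is reordered (and padded) before the kernel call and restored afterwards. The sketch below is a minimal illustration of that round trip only; the 128x4 tile shape, the NumPy host-side implementation, and the function signatures are assumptions for illustration, not the actual TensorRT-LLM code under utils/.

```python
# Hypothetical sketch of scale-factor (SF) swizzling for block-scaled formats.
# Assumption: SFs are stored row-major as [rows, row_groups] (one SF per group
# of elements in a row) and the kernel expects them re-tiled into 128x4 blocks.
import numpy as np

TILE_ROWS = 128   # assumed tile height
TILE_COLS = 4     # assumed tile width


def swizzle_sf(sf: np.ndarray) -> np.ndarray:
    """Pad to a multiple of the tile shape and lay out each tile contiguously."""
    rows, cols = sf.shape
    pad_r = (-rows) % TILE_ROWS
    pad_c = (-cols) % TILE_COLS
    padded = np.pad(sf, ((0, pad_r), (0, pad_c)))
    r, c = padded.shape
    # Split into (r/128, 128, c/4, 4) tiles, then emit whole tiles back to back.
    tiles = padded.reshape(r // TILE_ROWS, TILE_ROWS, c // TILE_COLS, TILE_COLS)
    return tiles.transpose(0, 2, 1, 3).reshape(-1)


def unswizzle_sf(swizzled: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Invert swizzle_sf and strip the padding."""
    r = rows + (-rows) % TILE_ROWS
    c = cols + (-cols) % TILE_COLS
    tiles = swizzled.reshape(r // TILE_ROWS, c // TILE_COLS, TILE_ROWS, TILE_COLS)
    padded = tiles.transpose(0, 2, 1, 3).reshape(r, c)
    return padded[:rows, :cols]


if __name__ == "__main__":
    # Round-trip check on a deliberately non-tile-aligned SF tensor.
    sf = np.arange(200 * 10, dtype=np.uint8).reshape(200, 10)
    assert np.array_equal(unswizzle_sf(swizzle_sf(sf), 200, 10), sf)
```

A reswizzle step would, under the same assumptions, convert directly between two such tiled layouts without materializing the row-major intermediate.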