TensorRT-LLM/tensorrt_llm/quantization
Latest commit 6c91f1c7ac: Mxfp8xmxfp4 quant mode (#4978) by Tracin, 2025-06-10 22:01:37 +08:00
Signed-off-by: Tracin <10434017+Tracin@users.noreply.github.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
Name                      Last commit                                                                     Date
utils                     Qwen3 supports TRTLLM FP4 MoE backend (#4530)                                   2025-05-23 18:31:08 +08:00
__init__.py               Update TensorRT-LLM (#2792)                                                     2025-02-18 21:27:39 +08:00
functional.py             feat: Add FP8 support for SM 120 (#3248)                                        2025-04-14 16:05:41 -07:00
image_processing.py       Update TensorRT-LLM (#2582)                                                     2024-12-16 21:50:47 -08:00
layers.py                 [NVBUG 5301980] Fix fp4 gemm padding. (#4662)                                   2025-05-27 11:30:53 +08:00
mode.py                   Mxfp8xmxfp4 quant mode (#4978)                                                  2025-06-10 22:01:37 +08:00
quantize_by_modelopt.py   feat: adding multimodal (only image for now) support in trtllm-bench (#3490)   2025-04-18 07:06:16 +08:00
quantize.py               chore: bump version to 0.19.0 (#3598) (#3841)                                   2025-04-29 16:57:22 +08:00