TensorRT-LLM/tensorrt_llm/quantization
Latest commit: Deepseek R1 FP8 Support on Blackwell (#6486)
Author: Zongfei Jing (7bb0a78631), 2025-08-01 10:26:28 +08:00
Signed-off-by: Barry Kang <43644113+Barry-Delaney@users.noreply.github.com>
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Barry Kang <43644113+Barry-Delaney@users.noreply.github.com>
Co-authored-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Co-authored-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
utils/                    Deepseek R1 FP8 Support on Blackwell (#6486)                                       2025-08-01 10:26:28 +08:00
__init__.py               Update TensorRT-LLM (#2792)                                                        2025-02-18 21:27:39 +08:00
functional.py             [TRTLLM-5863][feat] Support Weight-Only-Quantization in PyTorch Workflow (#5850)   2025-07-21 15:17:35 +08:00
image_processing.py       Update TensorRT-LLM (#2582)                                                        2024-12-16 21:50:47 -08:00
layers.py                 [feat] Support torch compile for attention dp (#5086)                              2025-07-01 13:48:52 -04:00
mode.py                   Mxfp8xmxfp4 quant mode (#4978)                                                     2025-06-10 22:01:37 +08:00
quantize_by_modelopt.py   feat: Add support for fp8 rowwise quantization (#4876)                             2025-06-14 06:37:48 -07:00
quantize.py               feat: Add support for fp8 rowwise quantization (#4876)                             2025-06-14 06:37:48 -07:00