TensorRT-LLM/tensorrt_llm/quantization
Latest commit 64ff5cac52 by SamareshSingh (2026-01-19 04:38:00 -05:00):
[None][chore] docs: clarify LoRA is not supported with --use_fp8_rowwise in Fp8RowwiseAttention (see #2603) (#10320)
Signed-off-by: Samaresh Kumar Singh <ssam3003@gmail.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: Kanghwan <861393+karljang@users.noreply.github.com>
Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
| Name | Last commit | Date |
| --- | --- | --- |
| `utils` | [None][fix] convert to CUDA tensor before calling _resmooth_kernel. (#10770) | 2026-01-17 16:18:34 +08:00 |
| `__init__.py` | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| `functional.py` | [None][feat] sm100 weight-only kernel (#10190) | 2026-01-05 09:44:36 +08:00 |
| `image_processing.py` | Update TensorRT-LLM (#2582) | 2024-12-16 21:50:47 -08:00 |
| `layers.py` | [None][chore] docs: clarify LoRA is not supported with --use_fp8_rowwise in Fp8RowwiseAttention (see #2603) (#10320) | 2026-01-19 04:38:00 -05:00 |
| `mode.py` | [OMNIML-2932] [feat] nvfp4 awq support (#8698) | 2025-12-03 19:47:13 +02:00 |
| `quantize_by_modelopt.py` | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| `quantize.py` | [OMNIML-2932] [feat] nvfp4 awq support (#8698) | 2025-12-03 19:47:13 +02:00 |
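
The headline commit (#10320) documents an existing restriction rather than changing behavior: LoRA adapters cannot be combined with the `--use_fp8_rowwise` flag because `Fp8RowwiseAttention` has no LoRA path. Below is a minimal sketch of the kind of fail-fast guard this restriction implies; `BuildConfig`, its fields, and `validate` are hypothetical illustrations, not the actual TensorRT-LLM API.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class BuildConfig:
    # Hypothetical stand-in for engine-build options; the real flag is the
    # build-script option --use_fp8_rowwise.
    use_fp8_rowwise: bool = False
    lora_dirs: List[str] = field(default_factory=list)  # hypothetical field


def validate(cfg: BuildConfig) -> None:
    """Fail fast instead of building an engine that silently ignores LoRA."""
    if cfg.use_fp8_rowwise and cfg.lora_dirs:
        raise ValueError(
            "LoRA is not supported together with --use_fp8_rowwise "
            "(Fp8RowwiseAttention has no LoRA path); see issue #2603."
        )


validate(BuildConfig(use_fp8_rowwise=True))  # fine: FP8 rowwise without LoRA
try:
    validate(BuildConfig(use_fp8_rowwise=True, lora_dirs=["./my_adapter"]))
except ValueError as err:
    print(err)  # the unsupported combination is rejected up front
```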
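
Similarly, the `utils` fix (#10770) converts inputs to CUDA tensors before invoking `_resmooth_kernel`. A minimal sketch of that pattern, assuming PyTorch and using a stand-in for the real kernel (the actual `_resmooth_kernel` signature is not shown in this listing):

```python
import torch


def _resmooth_kernel(t: torch.Tensor) -> torch.Tensor:
    # Stand-in for the real device kernel: it only asserts the precondition
    # the #10770 fix enforces. The real kernel re-smooths quantization scales.
    assert t.is_cuda, "_resmooth_kernel expects a CUDA tensor"
    return t


def resmooth(weights: torch.Tensor) -> torch.Tensor:
    # The gist of the fix: move host (CPU) tensors to the GPU before the
    # kernel call, since device code cannot read host memory.
    if not weights.is_cuda:
        weights = weights.cuda()
    return _resmooth_kernel(weights)


if torch.cuda.is_available():
    out = resmooth(torch.randn(4, 4))  # CPU input is converted, then processed
```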