TensorRT-LLM/tensorrt_llm/quantization

Latest commit: d9fba85396 — [OMNIML-2932] [feat] nvfp4 awq support (#8698)
Author: Wei-Ming Chen
Signed-off-by: weimingc <17592131+meenchen@users.noreply.github.com>
Date: 2025-12-03 19:47:13 +02:00
Name                     Last commit message                                                         Date
utils                    [None][perf] Use fp8 quant kernel in DS3.2 indexer module (#8701)           2025-10-29 12:45:09 +08:00
__init__.py              Update TensorRT-LLM (#2792)                                                 2025-02-18 21:27:39 +08:00
functional.py            [https://nvbugs/5410687][fix] Hopper w4a8 groupwise MoE interleave (#6708)  2025-08-07 15:30:16 -07:00
image_processing.py      Update TensorRT-LLM (#2582)                                                 2024-12-16 21:50:47 -08:00
layers.py                [None] [feat] Add model gpt-oss (#6645)                                     2025-08-07 03:04:18 -04:00
mode.py                  [OMNIML-2932] [feat] nvfp4 awq support (#8698)                              2025-12-03 19:47:13 +02:00
quantize_by_modelopt.py  [None][chore] update torch_dtype -> dtype in 'transformers' (#8263)         2025-10-15 17:09:30 +09:00
quantize.py              [OMNIML-2932] [feat] nvfp4 awq support (#8698)                              2025-12-03 19:47:13 +02:00