TensorRT-LLM/tensorrt_llm/quantization
Latest commit 1bbc0e323b by Fanrong Li, 2025-08-13 10:27:57 +08:00:
[None][fix] Pre-allocate workspaces for DeepGEMM MoE to avoid frequent cudaFree/cudaMalloc (#6811)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
Co-authored-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
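The fix in #6811 replaces per-call device allocations with a persistent, lazily grown workspace. Below is a minimal sketch of that caching pattern, assuming PyTorch; the names `_workspaces` and `get_workspace` are hypothetical illustrations, not the actual TensorRT-LLM API.

```python
import torch

# Cache one workspace buffer per device so repeated MoE calls reuse it
# instead of triggering cudaMalloc/cudaFree on every invocation.
# `_workspaces` and `get_workspace` are hypothetical names for illustration.
_workspaces: dict = {}

def get_workspace(num_bytes: int, device: torch.device) -> torch.Tensor:
    """Return a uint8 workspace of at least `num_bytes`, growing the cached
    buffer only when a larger request arrives (never shrinking it)."""
    buf = _workspaces.get(device)
    if buf is None or buf.numel() < num_bytes:
        # The only allocation path: one device malloc here replaces the
        # per-call free/malloc churn the commit message describes.
        buf = torch.empty(num_bytes, dtype=torch.uint8, device=device)
        _workspaces[device] = buf
    return buf[:num_bytes]
```

Growing but never shrinking the buffer trades a small amount of peak memory for eliminating allocator traffic on steady-state MoE calls.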
Name                     Last commit message                                                                                   Last commit date
utils                    [None][fix] Pre-allocate workspaces for DeepGEMM MoE to avoid frequent cudaFree/cudaMalloc (#6811)   2025-08-13 10:27:57 +08:00
__init__.py              Update TensorRT-LLM (#2792)                                                                           2025-02-18 21:27:39 +08:00
functional.py            [https://nvbugs/5410687][fix] Hopper w4a8 groupwise MoE interleave (#6708)                            2025-08-07 15:30:16 -07:00
image_processing.py      Update TensorRT-LLM (#2582)                                                                           2024-12-16 21:50:47 -08:00
layers.py                [None] [feat] Add model gpt-oss (#6645)                                                               2025-08-07 03:04:18 -04:00
mode.py                  [None] [feat] Add model gpt-oss (#6645)                                                               2025-08-07 03:04:18 -04:00
quantize_by_modelopt.py  feat: Add support for fp8 rowwise quantization (#4876)                                                2025-06-14 06:37:48 -07:00
quantize.py              feat: Add support for fp8 rowwise quantization (#4876)                                                2025-06-14 06:37:48 -07:00
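The quantize.py and quantize_by_modelopt.py entries both point at fp8 rowwise quantization (#4876), i.e. one scale per row rather than one per tensor. A minimal sketch of that scheme, assuming PyTorch's `float8_e4m3fn` dtype; `quantize_fp8_rowwise` is an illustrative helper, not the library's API.

```python
import torch

def quantize_fp8_rowwise(x: torch.Tensor):
    """Quantize a 2-D tensor to fp8 with one scale per row.

    Hypothetical helper for illustration, not the library's API.
    """
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3
    # Per-row absolute maximum, clamped to avoid division by zero.
    amax = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12)
    scale = (amax / fp8_max).float()
    q = (x / scale).to(torch.float8_e4m3fn)
    return q, scale  # dequantize with q.float() * scale
```

Per-row scales track outliers in individual rows, so a single extreme value degrades the precision of only its own row instead of the whole tensor.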