TensorRT-LLM/tensorrt_llm/models

Latest commit 1389f5a4d3 (2025-06-14 06:37:48 -07:00):
feat: Add support for fp8 rowwise quantization (#4876)
Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
Co-authored-by: aikitoria <151776613+aikitoria@users.noreply.github.com>
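The latest commit adds an fp8 rowwise quantization mode. Below is a minimal sketch of how such a mode might be selected when converting a checkpoint in the TRT flow, assuming a QuantConfig class (defined in modeling_utils.py, listed below) and a QuantAlgo.FP8_PER_CHANNEL_PER_TOKEN enum value; the model id and dtype are illustrative, not taken from this listing.

```python
# Hedged sketch: selecting fp8 rowwise (per-channel weight scales, per-token
# activation scales) quantization for a Hugging Face checkpoint conversion.
# QuantAlgo.FP8_PER_CHANNEL_PER_TOKEN and the from_hugging_face keyword
# arguments are assumptions about the public API, not confirmed here.
from tensorrt_llm.models import LLaMAForCausalLM
from tensorrt_llm.models.modeling_utils import QuantConfig
from tensorrt_llm.quantization import QuantAlgo

# Pick the fp8 rowwise algorithm for weight/activation quantization.
quant_config = QuantConfig(quant_algo=QuantAlgo.FP8_PER_CHANNEL_PER_TOKEN)

# Convert the checkpoint with the quantization config applied; the resulting
# model object can then be built into a TensorRT engine.
model = LLaMAForCausalLM.from_hugging_face(
    "meta-llama/Llama-3.1-8B",  # illustrative model id
    dtype="bfloat16",
    quant_config=quant_config,
)
```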
baichuan/
bert/
bloom/
chatglm/
clip/
cogvlm/
commandr/
dbrx/
deepseek_v1/
deepseek_v2/
dit/                      Support RingAttention in the BertAttention plugin and the DiT model (#3661)  (2025-05-09 08:06:54 +08:00)
eagle/
enc_dec/
falcon/
gemma/                    Solve underallocation in VSWA+/VGQA (#4667)  (2025-06-12 12:12:46 +08:00)
gpt/                      feat: Add support for fp8 rowwise quantization (#4876)  (2025-06-14 06:37:48 -07:00)
gptj/
gptneox/
grok/
llama/                    feat: Support Mistral Small 3.1 24B VLM in TRT workflow (#4183)  (2025-05-14 03:47:22 +08:00)
mamba/
medusa/
mllama/
mmdit_sd3/
mpt/
multimodal_encoders/
nemotron_nas/             test(perf): Add some Llama-3_3-Nemotron-Super-49B-v1 integration-perf-tests (TRT flow, trtllm-bench) (#4128)  (2025-05-19 12:00:48 -07:00)
opt/
phi/
phi3/
qwen/                     chore: Mass integration of release/0.20 (#4898)  (2025-06-08 23:26:26 +08:00)
recurrentgemma/
redrafter/
stdit/
unet/
__init__.py
automodel.py
convert_utils.py
generation_mixin.py       fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399)  (2025-05-19 14:25:36 -07:00)
model_weights_loader.py
modeling_utils.py         [NVBUG 5301980] Fix fp4 gemm padding. (#4662)  (2025-05-27 11:30:53 +08:00)