TensorRT-LLMs/tensorrt_llm/models
Latest commit: 8731f5f14f, "chore: Mass integration of release/0.20 (#4898)" by Omer Ullman Argov
2025-06-08 23:26:26 +08:00
| Name | Last commit | Last updated |
| --- | --- | --- |
| baichuan | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| bert | doc: fix path after examples migration (#3814) | 2025-04-24 02:36:45 +08:00 |
| bloom | Update TensorRT-LLM | 2024-08-20 18:55:15 +08:00 |
| chatglm | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| clip | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| cogvlm | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| commandr | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| dbrx | Update TensorRT-LLM (#1793) | 2024-06-18 18:18:23 +08:00 |
| deepseek_v1 | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| deepseek_v2 | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| dit | Support RingAttention in the BertAttention plugin and the DiT model (#3661) | 2025-05-09 08:06:54 +08:00 |
| eagle | test: Get Eagle tests working (#3593) | 2025-04-20 00:50:57 +08:00 |
| enc_dec | fix: nvbugs/5075538: fix cross attention mask when decoder input len > 1 (#3585) | 2025-04-16 08:31:33 +08:00 |
| falcon | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| gemma | chore: Change the type annotations of input_ids and position_ids to int32. (#4632) | 2025-06-07 16:10:47 +08:00 |
| gpt | fix: https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) | 2025-05-15 11:16:45 +08:00 |
| gptj | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| gptneox | Update TensorRT-LLM (#1891) | 2024-07-04 14:37:19 +08:00 |
| grok | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| llama | feat: Support Mistral Small 3.1 24B VLM in TRT workflow (#4183) | 2025-05-14 03:47:22 +08:00 |
| mamba | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| medusa | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| mllama | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| mmdit_sd3 | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| mpt | Update TensorRT-LLM (#1763) | 2024-06-11 16:59:02 +08:00 |
| multimodal_encoders | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| nemotron_nas | test(perf): Add some Llama-3_3-Nemotron-Super-49B-v1 integration-perf-tests (TRT flow, trtllm-bench) (#4128) | 2025-05-19 12:00:48 -07:00 |
| opt | Add initial EAGLE-3 implementation (#3035) | 2025-03-29 22:31:24 +08:00 |
| phi | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| phi3 | Add support for Phi-4-mini (#2990) | 2025-04-02 08:34:39 +08:00 |
| qwen | chore: Mass integration of release/0.20 (#4898) | 2025-06-08 23:26:26 +08:00 |
| recurrentgemma | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| redrafter | fix: redrafter sampling (#3278) | 2025-04-08 07:49:32 +08:00 |
| stdit | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| unet | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| __init__.py | Add support for Phi-4-MM (#3296) | 2025-04-14 14:24:10 +08:00 |
| automodel.py | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| convert_utils.py | feat: adding multimodal (only image for now) support in trtllm-bench (#3490) | 2025-04-18 07:06:16 +08:00 |
| generation_mixin.py | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| model_weights_loader.py | Add support for Phi-4-mini (#2990) | 2025-04-02 08:34:39 +08:00 |
| modeling_utils.py | [NVBUG 5301980] Fix fp4 gemm padding. (#4662) | 2025-05-27 11:30:53 +08:00 |