| Name | Last commit message | Last commit date |
| --- | --- | --- |
| baichuan | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| bert | doc: fix path after examples migration (#3814) | 2025-04-24 02:36:45 +08:00 |
| bloom | Update TensorRT-LLM | 2024-08-20 18:55:15 +08:00 |
| chatglm | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| clip | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| cogvlm | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| commandr | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| dbrx | Update TensorRT-LLM (#1793) | 2024-06-18 18:18:23 +08:00 |
| deepseek_v1 | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| deepseek_v2 | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| dit | Support RingAttention in the BertAttention plugin and the DiT model (#3661) | 2025-05-09 08:06:54 +08:00 |
| eagle | test: Get Eagle tests working (#3593) | 2025-04-20 00:50:57 +08:00 |
| enc_dec | fix: nvbugs/5075538: fix cross attention mask when decoder input len > 1 (#3585) | 2025-04-16 08:31:33 +08:00 |
| falcon | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| gemma | Solve underallocation in VSWA+/VGQA (#4667) | 2025-06-12 12:12:46 +08:00 |
| gpt | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| gptj | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| gptneox | Update TensorRT-LLM (#1891) | 2024-07-04 14:37:19 +08:00 |
| grok | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| llama | feat: Support Mistral Small 3.1 24B VLM in TRT workflow (#4183) | 2025-05-14 03:47:22 +08:00 |
| mamba | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| medusa | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| mllama | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| mmdit_sd3 | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| mpt | Update TensorRT-LLM (#1763) | 2024-06-11 16:59:02 +08:00 |
| multimodal_encoders | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| nemotron_nas | test(perf): Add some Llama-3_3-Nemotron-Super-49B-v1 integration-perf-tests (TRT flow, trtllm-bench) (#4128) | 2025-05-19 12:00:48 -07:00 |
| opt | Add initial EAGLE-3 implementation (#3035) | 2025-03-29 22:31:24 +08:00 |
| phi | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| phi3 | Add support for Phi-4-mini (#2990) | 2025-04-02 08:34:39 +08:00 |
| qwen | chore: Mass integration of release/0.20 (#4898) | 2025-06-08 23:26:26 +08:00 |
| recurrentgemma | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| redrafter | ReDrafter support for Qwen (#4875) | 2025-06-28 02:33:10 +08:00 |
| stdit | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| unet | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| __init__.py | ReDrafter support for Qwen (#4875) | 2025-06-28 02:33:10 +08:00 |
| automodel.py | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| convert_utils.py | feat: adding multimodal (only image for now) support in trtllm-bench (#3490) | 2025-04-18 07:06:16 +08:00 |
| generation_mixin.py | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| model_weights_loader.py | Add support for Phi-4-mini (#2990) | 2025-04-02 08:34:39 +08:00 |
| modeling_utils.py | [TRTLLM-6291] feat: Add user-provided speculative decoding support (#5204) | 2025-07-07 16:30:43 +02:00 |
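The listing combines per-architecture model directories (`llama`, `gpt`, `qwen`, ...) with the shared loading and conversion utilities (`automodel.py`, `model_weights_loader.py`, `modeling_utils.py`). As a minimal sketch of how these pieces are typically used together, the snippet below assumes the `AutoModelForCausalLM.from_hugging_face` entry point from `automodel.py` and the `save_checkpoint` method from `modeling_utils.py`; exact names and signatures vary across TensorRT-LLM releases, so treat it as illustrative rather than definitive.

```python
# Hedged sketch (not verbatim from the repo): convert a Hugging Face model
# into a TensorRT-LLM checkpoint using the models package listed above.
# AutoModelForCausalLM, from_hugging_face, and save_checkpoint are assumed
# from automodel.py / modeling_utils.py and may differ between releases.
from tensorrt_llm.models import AutoModelForCausalLM

# Read the HF config/weights and map them onto the matching model class.
model = AutoModelForCausalLM.from_hugging_face(
    "./Llama-3.1-8B-Instruct",  # hypothetical local HF checkpoint directory
    dtype="float16",
)

# Serialize a TensorRT-LLM checkpoint that trtllm-build can then consume.
model.save_checkpoint("./tllm_ckpt", save_config=True)
```

In this flow, the architecture-specific directories above supply the concrete model subclasses that the auto-class resolves to, while `model_weights_loader.py` and `convert_utils.py` handle the weight mapping during conversion.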