Directory listing: TensorRT-LLM/tensorrt_llm/models
Latest commit: [None][feat] Draft: Save state first pass (#7012) — Izzy Putterman (1ad7bc4c78)
Signed-off-by: Izzy Putterman <iputterman@nvidia.com>
2025-10-01 18:40:55 -04:00
| Name | Last commit | Date |
| --- | --- | --- |
| baichuan | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| bert | doc: fix path after examples migration (#3814) | 2025-04-24 02:36:45 +08:00 |
| bloom | Update TensorRT-LLM | 2024-08-20 18:55:15 +08:00 |
| chatglm | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| clip | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| cogvlm | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| commandr | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| dbrx | Update TensorRT-LLM (#1793) | 2024-06-18 18:18:23 +08:00 |
| deepseek_v1 | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| deepseek_v2 | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| dit | Support RingAttention in the BertAttention plugin and the DiT model (#3661) | 2025-05-09 08:06:54 +08:00 |
| eagle | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| enc_dec | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| falcon | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| gemma | [https://nvbugs/5496960][fix] Fix Gemma model forward. (#7509) | 2025-09-22 14:28:38 +08:00 |
| gpt | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| gptj | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| gptneox | Update TensorRT-LLM (#1891) | 2024-07-04 14:37:19 +08:00 |
| grok | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| llama | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| mamba | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| medusa | [#7208][fix] Fix config type of MedusaConfig (#7320) | 2025-09-09 23:25:17 -07:00 |
| mllama | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| mmdit_sd3 | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| mpt | Update TensorRT-LLM (#1763) | 2024-06-11 16:59:02 +08:00 |
| multimodal_encoders | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| nemotron_nas | test(perf): Add some Llama-3_3-Nemotron-Super-49B-v1 integration-perf-tests (TRT flow, trtllm-bench) (#4128) | 2025-05-19 12:00:48 -07:00 |
| opt | Add initial EAGLE-3 implementation (#3035) | 2025-03-29 22:31:24 +08:00 |
| phi | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| phi3 | [None][fix] Refactoring to avoid circular import when importing torch models (#6720) | 2025-08-11 18:00:42 -04:00 |
| qwen | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| recurrentgemma | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| redrafter | [refactor] Simplification of Speculative decoding configs (#5639) | 2025-07-10 11:37:30 -04:00 |
| stdit | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| unet | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| __init__.py | [None][feat] Add Qwen3 MoE support to TensorRT backend (#6470) | 2025-08-06 17:02:35 +08:00 |
| automodel.py | [nvbug/5387226] chore: add propogation for trust_remote_code to AutoConfig (#6001) | 2025-07-16 16:05:38 +08:00 |
| convert_utils.py | feat: adding multimodal (only image for now) support in trtllm-bench (#3490) | 2025-04-18 07:06:16 +08:00 |
| generation_mixin.py | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00 |
| model_weights_loader.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| modeling_utils.py | [None][feat] Draft: Save state first pass (#7012) | 2025-10-01 18:40:55 -04:00 |