| Name | Last commit | Last updated |
| --- | --- | --- |
| checkpoints | [Issue 6193] Fix gemma3vl weight loader (#6233) | 2025-07-22 10:32:18 -07:00 |
| __init__.py | [fix] Fix Mistral3VLM weight-loading & enable in pre-merge (#6105) | 2025-07-17 11:04:17 -07:00 |
| .gitkeep | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| modeling_auto.py | [feat] Implement model-agnostic one-engine eagle3 (#4778) | 2025-06-13 08:11:41 -07:00 |
| modeling_bert.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| modeling_clip.py | [feat] Enable TP and batching for PixtralVisionModel / Mistral3VLM (#6152) | 2025-07-22 11:06:41 -07:00 |
| modeling_deepseekv3.py | Mtp optimizations round1 (#5689) | 2025-07-25 13:48:27 -04:00 |
| modeling_exaone4.py | feat: EXAONE4.0 support (#5696) | 2025-07-14 22:28:10 +09:00 |
| modeling_gemma3.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_gemma3vl.py | [Issue 6193] Fix gemma3vl weight loader (#6233) | 2025-07-22 10:32:18 -07:00 |
| modeling_hyperclovax.py | feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522) | 2025-07-07 18:03:12 -07:00 |
| modeling_llama_min_latency.py | [Model load] Fix llama min-latency model load (#5883) | 2025-07-15 09:29:19 +08:00 |
| modeling_llama.py | [TRTLLM-6445] feat: Enable AllReduce-associated fusion patterns in Llama3/4. (#6205) | 2025-07-28 09:36:26 +08:00 |
| modeling_llava_next.py | Update transformers to 4.53.0 (#5747) | 2025-07-09 09:32:24 -07:00 |
| modeling_mistral.py | chore: set default device to cpu on Multimodal models (#5994) | 2025-07-22 21:45:31 -07:00 |
| modeling_mixtral.py | feat: Remove padding in attention DP. (#6064) | 2025-07-18 23:30:34 +08:00 |
| modeling_mllama.py | feat : support duplicate_kv_weight for qwen3 blockwise scale (#5459) | 2025-06-30 11:49:22 +08:00 |
| modeling_multimodal_encoder.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| modeling_multimodal_utils.py | [TRTLLM-5059][feat] Add KV cache reuse support for multimodal models (#5444) | 2025-07-21 16:11:58 -07:00 |
| modeling_nemotron_h.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_nemotron_nas.py | Add basic Nemo Ckpt Lora Loading in pytorch flow (#6019) | 2025-07-22 19:42:45 -07:00 |
| modeling_nemotron.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| modeling_phi3.py | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| modeling_phi4mm.py | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| modeling_pixtral.py | [feat] Enable TP and batching for PixtralVisionModel / Mistral3VLM (#6152) | 2025-07-22 11:06:41 -07:00 |
| modeling_qwen2vl.py | chore: set default device to cpu on Multimodal models (#5994) | 2025-07-22 21:45:31 -07:00 |
| modeling_qwen3_moe.py | [Fix][nvbug 5401163][nvbug 5404726][Qwen3] Fix bug of MoE on tp > 1 with trtllm moe backend (#6235) | 2025-07-24 21:47:37 +08:00 |
| modeling_qwen3.py | feat(eagle3):support qwen3 dense model (#5879) | 2025-07-19 01:24:32 +08:00 |
| modeling_qwen_moe.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_qwen.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| modeling_siglip.py | feat: Update Gemma3 Vision Encoder (#5973) | 2025-07-14 22:38:10 +08:00 |
| modeling_speculative.py | Mtp optimizations round1 (#5689) | 2025-07-25 13:48:27 -04:00 |
| modeling_utils.py | [Fix][nvbug 5401163][nvbug 5404726][Qwen3] Fix bug of MoE on tp > 1 with trtllm moe backend (#6235) | 2025-07-24 21:47:37 +08:00 |
| modeling_vila.py | feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522) | 2025-07-07 18:03:12 -07:00 |