TensorRT-LLM/tensorrt_llm/_torch/models
Latest commit: 6e1aee6fd6 — [fix] Performance Optimization for MNNVL TwoShot Kernel (#5934)
Signed-off-by: Shiyu Li <shili@nvidia.com>
Co-authored-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
2025-07-17 10:49:51 +08:00
| File | Last commit | Date |
| --- | --- | --- |
| checkpoints | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| __init__.py | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| .gitkeep | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| modeling_auto.py | [feat] Implement model-agnostic one-engine eagle3 (#4778) | 2025-06-13 08:11:41 -07:00 |
| modeling_bert.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| modeling_clip.py | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| modeling_deepseekv3.py | [fix] Performance Optimization for MNNVL TwoShot Kernel (#5934) | 2025-07-17 10:49:51 +08:00 |
| modeling_exaone4.py | feat: EXAONE4.0 support (#5696) | 2025-07-14 22:28:10 +09:00 |
| modeling_gemma3.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_gemma3vl.py | feat: Update Gemma3 Vision Encoder (#5973) | 2025-07-14 22:38:10 +08:00 |
| modeling_hyperclovax.py | feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522) | 2025-07-07 18:03:12 -07:00 |
| modeling_llama_min_latency.py | [Model load] Fix llama min-latency model load (#5883) | 2025-07-15 09:29:19 +08:00 |
| modeling_llama.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_llava_next.py | Update transformers to 4.53.0 (#5747) | 2025-07-09 09:32:24 -07:00 |
| modeling_mistral.py | feat(models): Mistral3.1 VLM pytorch backend support (#5529) | 2025-07-09 13:17:40 -07:00 |
| modeling_mixtral.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_mllama.py | feat: support duplicate_kv_weight for qwen3 blockwise scale (#5459) | 2025-06-30 11:49:22 +08:00 |
| modeling_multimodal_encoder.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| modeling_multimodal_utils.py | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| modeling_nemotron_h.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_nemotron_nas.py | feat: Add support for YARN in NemotronNAS models (#4906) | 2025-06-29 09:45:49 +03:00 |
| modeling_nemotron.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| modeling_phi3.py | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| modeling_phi4mm.py | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| modeling_pixtral.py | feat(models): Mistral3.1 VLM pytorch backend support (#5529) | 2025-07-09 13:17:40 -07:00 |
| modeling_qwen2vl.py | Update transformers to 4.53.0 (#5747) | 2025-07-09 09:32:24 -07:00 |
| modeling_qwen3_moe.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_qwen3.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| modeling_qwen_moe.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_qwen.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00 |
| modeling_siglip.py | feat: Update Gemma3 Vision Encoder (#5973) | 2025-07-14 22:38:10 +08:00 |
| modeling_speculative.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_utils.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00 |
| modeling_vila.py | feat: add MultimodalParams & putting all multimodal params into it and refactor HyperCLOVAX & Qwen2/2.5-VL (#5522) | 2025-07-07 18:03:12 -07:00 |