TensorRT-LLM/tensorrt_llm/_torch/models
Latest commit: ed801ff74b by danielafrimi, 2025-08-05 04:18:25 -04:00
[None][fix] Remove expand configuration from mamba2 mixer (#6521)
Signed-off-by: Daniel Afrimi <danielafrimi8@gmail.com>
File | Last commit | Date
checkpoints | [None][fix] Remove expand configuration from mamba2 mixer (#6521) | 2025-08-05 04:18:25 -04:00
__init__.py | [fix] Fix Mistral3VLM weight-loading & enable in pre-merge (#6105) | 2025-07-17 11:04:17 -07:00
modeling_auto.py | [feat] Implement model-agnostic one-engine eagle3 (#4778) | 2025-06-13 08:11:41 -07:00
modeling_bert.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00
modeling_clip.py | [feat] Enable TP and batching for PixtralVisionModel / Mistral3VLM (#6152) | 2025-07-22 11:06:41 -07:00
modeling_deepseekv3.py | [fix] Fix DeepSeek w4a8 weight loading (#6498) | 2025-08-04 10:12:06 +08:00
modeling_exaone4.py | chore: add EXAONE4 accuracy test (#6397) | 2025-08-04 10:14:16 +08:00
modeling_gemma3.py | [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379) | 2025-08-05 07:47:41 +00:00
modeling_gemma3vl.py | [TRTLLM-6657][feat] Add LoRA support for Gemma3 (#6371) | 2025-08-01 09:19:54 -04:00
modeling_hyperclovax.py | fix: support mixture of text & multimodal prompts (#6345) | 2025-07-30 08:52:31 +08:00
modeling_llama_min_latency.py | [Model load] Fix llama min-latency model load (#5883) | 2025-07-15 09:29:19 +08:00
modeling_llama.py | [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379) | 2025-08-05 07:47:41 +00:00
modeling_llava_next.py | [TRTLLM-6654][feat] Add support for external multimodal embeddings (#6263) | 2025-07-30 10:00:15 -04:00
modeling_mistral.py | fix: support mixture of text & multimodal prompts (#6345) | 2025-07-30 08:52:31 +08:00
modeling_mixtral.py | feat: Remove padding in attention DP. (#6064) | 2025-07-18 23:30:34 +08:00
modeling_mllama.py | feat : support duplicate_kv_weight for qwen3 blockwise scale (#5459) | 2025-06-30 11:49:22 +08:00
modeling_multimodal_encoder.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
modeling_multimodal_utils.py | [TRTLLM-5059][feat] Add KV cache reuse support for multimodal models (#5444) | 2025-07-21 16:11:58 -07:00
modeling_nemotron_h.py | [None][fix] Remove expand configuration from mamba2 mixer (#6521) | 2025-08-05 04:18:25 -04:00
modeling_nemotron_nas.py | [Perf]: Add residual, norm for nemotron_nas models (#6455) | 2025-07-30 09:10:38 -07:00
modeling_nemotron.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00
modeling_phi3.py | feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353) | 2025-07-30 09:20:16 -07:00
modeling_phi4mm.py | [5830][feat] Improve LoRA cache memory control (#6220) | 2025-07-31 09:26:38 +03:00
modeling_pixtral.py | [feat] Enable TP and batching for PixtralVisionModel / Mistral3VLM (#6152) | 2025-07-22 11:06:41 -07:00
modeling_qwen2vl.py | [PERF] Move calculation Qwen2-VL's rotary_cos_sin to LLM worker process (#6004) | 2025-07-31 09:35:24 +09:00
modeling_qwen3_moe.py | [Fix][nvbug 5401163][nvbug 5404726][Qwen3] Fix bug of MoE on tp > 1 with trtllm moe backend (#6235) | 2025-07-24 21:47:37 +08:00
modeling_qwen3.py | feat(eagle3):support qwen3 dense model (#5879) | 2025-07-19 01:24:32 +08:00
modeling_qwen_moe.py | [TRTLLM-5493] Add core infrastructure to enable loading of custom checkpoint formats (#5372) | 2025-07-17 00:50:30 +08:00
modeling_qwen.py | feat: Remove not used padding_idx in models (#5385) | 2025-06-25 17:19:59 +08:00
modeling_siglip.py | feat: Update Gemma3 Vision Encoder (#5973) | 2025-07-14 22:38:10 +08:00
modeling_speculative.py | Mtp optimizations round1 (#5689) | 2025-07-25 13:48:27 -04:00
modeling_utils.py | [fix] Fix DeepSeek w4a8 weight loading (#6498) | 2025-08-04 10:12:06 +08:00
modeling_vila.py | fix: support mixture of text & multimodal prompts (#6345) | 2025-07-30 08:52:31 +08:00