TensorRT-LLM/tensorrt_llm/_torch/models
Latest commit: d26040e5d9 by Luis Vega, 2025-06-24 16:27:31 +08:00
chore: delete mamba hybrid, since it is now called NemotronH (#5409)
Signed-off-by: Luis Vega <vegaluisjose@users.noreply.github.com>
__init__.py feat: Basic skeleton for Gemma3 VLM (#5108) 2025-06-13 17:27:04 +08:00
.gitkeep Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
modeling_auto.py [feat] Implement model-agnostic one-engine eagle3 (#4778) 2025-06-13 08:11:41 -07:00
modeling_bert.py chore: Change the type annotations of input_ids and position_ids to int32. (#4632) 2025-06-07 16:10:47 +08:00
modeling_clip.py Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
modeling_deepseekv3.py fix: refactor and fix mtp vanilla (#4762) 2025-06-20 05:23:39 +08:00
modeling_gemma3.py feat: Basic skeleton for Gemma3 VLM (#5108) 2025-06-13 17:27:04 +08:00
modeling_gemma3vl.py feat: Basic skeleton for Gemma3 VLM (#5108) 2025-06-13 17:27:04 +08:00
modeling_hyperclovax.py feat: add HyperCLOVAX-SEED-Vision support in refactored way (#4799) 2025-06-09 11:04:04 +08:00
modeling_llama_min_latency.py [fix] Fix llama4 min latency (#5117) 2025-06-11 15:44:38 +08:00
modeling_llama.py feat: Enable EPLB to existing MoE models (#5203) 2025-06-15 11:48:06 +08:00
modeling_llava_next.py feat: add HyperCLOVAX-SEED-Vision support in refactored way (#4799) 2025-06-09 11:04:04 +08:00
modeling_mistral.py chore: Change the type annotations of input_ids and position_ids to int32. (#4632) 2025-06-07 16:10:47 +08:00
modeling_mixtral.py feat: Enable EPLB to existing MoE models (#5203) 2025-06-15 11:48:06 +08:00
modeling_mllama.py chore: Change the type annotations of input_ids and position_ids to int32. (#4632) 2025-06-07 16:10:47 +08:00
modeling_multimodal_encoder.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
modeling_multimodal_utils.py chore: Change the type annotations of input_ids and position_ids to int32. (#4632) 2025-06-07 16:10:47 +08:00
modeling_nemotron_h.py [TRTLLM-5835][feat] Optimized Mamba2Mixer prefill (#5128) 2025-06-16 16:29:17 +03:00
modeling_nemotron_nas.py Use backend to replace macro to control enablement of MNNVL all reduce (#4635) 2025-06-12 11:22:49 +08:00
modeling_nemotron.py chore: Change the type annotations of input_ids and position_ids to int32. (#4632) 2025-06-07 16:10:47 +08:00
modeling_qwen2vl.py [TRTLLM-5007][feat] Add multimodal hashing support (image hashing) (#4145) 2025-06-10 01:59:56 +08:00
modeling_qwen3_moe.py Refactor CutlassFusedMoE (#5344) 2025-06-19 00:04:07 -07:00
modeling_qwen3.py chore: Refactor apply_rope. (#4918) 2025-06-09 16:51:59 +08:00
modeling_qwen_moe.py feat: Enable EPLB to existing MoE models (#5203) 2025-06-15 11:48:06 +08:00
modeling_qwen.py chore: Change the type annotations of input_ids and position_ids to int32. (#4632) 2025-06-07 16:10:47 +08:00
modeling_siglip.py Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
modeling_speculative.py [TRTLLM-4983] feat: enable overlap scheduler between draft forwards (#4802) 2025-06-15 23:09:16 +08:00
modeling_utils.py chore: Mass integration of release/0.20 (#5082) 2025-06-17 14:32:02 +03:00
modeling_vila.py feat: add HyperCLOVAX-SEED-Vision support in refactored way (#4799) 2025-06-09 11:04:04 +08:00