TensorRT-LLM/tensorrt_llm/_torch/models
Latest commit 8097be7e9c by Izzy Putterman, 2025-09-15: [None][feat] Eagle, use last hidden post norm (#7546)
checkpoints [None][fix] enable NvFP4/FP8 quantization for Nemotron-H architecture (#7589) 2025-09-09 11:42:22 +03:00
__init__.py [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) 2025-08-15 06:56:44 +08:00
modeling_auto.py [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) 2025-08-19 21:42:50 -07:00
modeling_bert.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_clip.py [feat] Enable TP and batching for PixtralVisionModel / Mistral3VLM (#6152) 2025-07-22 11:06:41 -07:00
modeling_deepseekv3.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_exaone4.py [#6186][feat] Introduce QKNormRoPEAttention module (#6830) 2025-09-05 14:04:41 +02:00
modeling_gemma3.py [#6186][feat] Introduce QKNormRoPEAttention module (#6830) 2025-09-05 14:04:41 +02:00
modeling_gemma3vl.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_gpt_oss.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_hunyuan_moe.py [None][fix] fix hunyuan_moe init bug (#7502) 2025-09-04 03:06:00 -04:00
modeling_hyperclovax.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_llama_min_latency.py [Model load] Fix llama min-latency model load (#5883) 2025-07-15 09:29:19 +08:00
modeling_llama.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_llava_next.py [TRTLLM-6903][feat] Support chunked prefill for multimodal models (#6843) 2025-09-14 20:10:10 -07:00
modeling_mistral.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_mixtral.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_mllama.py feat : support duplicate_kv_weight for qwen3 blockwise scale (#5459) 2025-06-30 11:49:22 +08:00
modeling_multimodal_encoder.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
modeling_multimodal_utils.py [TRTLLM-6903][feat] Support chunked prefill for multimodal models (#6843) 2025-09-14 20:10:10 -07:00
modeling_nemotron_h.py [None][fix] enable NvFP4/FP8 quantization for Nemotron-H architecture (#7589) 2025-09-09 11:42:22 +03:00
modeling_nemotron_nas.py [Perf]: Add residual, norm for nemotron_nas models (#6455) 2025-07-30 09:10:38 -07:00
modeling_nemotron.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_phi3.py [None][fix] Accommodate Phi3/4 to work with ModelOpt's FP8 ckpts in Torch (#6761) 2025-08-19 09:22:46 -07:00
modeling_phi4mm.py [TRTLLM-7918][feat] Revert "Support kvcache reuse for phi4mm (#7563)" (#7722) 2025-09-15 17:19:44 +08:00
modeling_pixtral.py [TRTLLM-7442][model] Remove unnecessary D2H copies (#7273) 2025-09-03 23:14:20 -04:00
modeling_qwen2vl.py [TRTLLM-6903][feat] Support chunked prefill for multimodal models (#6843) 2025-09-14 20:10:10 -07:00
modeling_qwen3_moe.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_qwen3.py [https://nvbugs/5505402] [fix] Disable deep_gemm for Qwen3 QKNormRoPEAttention and Linear layers due to accuracy issues (#7616) 2025-09-10 18:30:48 +01:00
modeling_qwen_moe.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_qwen.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_siglip.py feat: Update Gemma3 Vision Encoder (#5973) 2025-07-14 22:38:10 +08:00
modeling_speculative.py [None][feat] Eagle, use last hidden post norm (#7546) 2025-09-15 12:23:57 -04:00
modeling_utils.py [None][fix] enable NvFP4/FP8 quantization for Nemotron-H architecture (#7589) 2025-09-09 11:42:22 +03:00
modeling_vila.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
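These files are the model definitions for TensorRT-LLM's PyTorch backend, with `modeling_auto.py` resolving a checkpoint's declared architecture to the matching `modeling_*.py` class. A minimal sketch of exercising this path through the documented high-level LLM API follows; the model name and sampling values are illustrative placeholders, and it assumes the PyTorch backend is the default in this version.

```python
# Minimal sketch: driving the PyTorch-backend model definitions in this
# directory via TensorRT-LLM's high-level LLM API. The model name and
# sampling values are illustrative placeholders, not fixed choices.
from tensorrt_llm import LLM, SamplingParams


def main():
    # LLM loads the checkpoint and dispatches on its declared architecture
    # (e.g. LlamaForCausalLM) to the corresponding modeling_*.py class.
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
    params = SamplingParams(max_tokens=64, temperature=0.8)
    outputs = llm.generate(["What is speculative decoding?"], params)
    print(outputs[0].outputs[0].text)


if __name__ == "__main__":
    main()
```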