TensorRT-LLM/tensorrt_llm/_torch/models
Latest commit: 0e72e8f7e6 by Gabriel Wu, 2025-09-19 16:45:35 +08:00
[None][feat] Support EPLB in Qwen3 MoE (#7443)
Signed-off-by: Gabriel Wu <13583761+lucifer1004@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
checkpoints [None][fix] enable NvFP4/FP8 quantization for Nemotron-H architecture (#7589) 2025-09-09 11:42:22 +03:00
__init__.py [TRTLLM-6577][feat] Support nano_v2_vlm in pytorch backend (#7207) 2025-09-18 16:26:20 +08:00
modeling_auto.py [TRTLLM-6746][feat] Enable two-model spec dec for MTP Eagle (#7001) 2025-09-18 12:05:36 -04:00
modeling_bert.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_clip.py [feat] Enable TP and batching for PixtralVisionModel / Mistral3VLM (#6152) 2025-07-22 11:06:41 -07:00
modeling_deepseekv3.py [TRTLLM-6746][feat] Enable two-model spec dec for MTP Eagle (#7001) 2025-09-18 12:05:36 -04:00
modeling_exaone4.py [#6186][feat] Introduce QKNormRoPEAttention module (#6830) 2025-09-05 14:04:41 +02:00
modeling_gemma3.py [#6186][feat] Introduce QKNormRoPEAttention module (#6830) 2025-09-05 14:04:41 +02:00
modeling_gemma3vl.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_gpt_oss.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_hunyuan_moe.py [None][fix] fix hunyuan_moe init bug (#7502) 2025-09-04 03:06:00 -04:00
modeling_hyperclovax.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_llama_min_latency.py [Model load] Fix llama min-latency model load (#5883) 2025-07-15 09:29:19 +08:00
modeling_llama.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_llava_next.py [TRTLLM-6903][feat] Support chunked prefill for multimodal models (#6843) 2025-09-14 20:10:10 -07:00
modeling_mistral.py [TRTLLM-7410][feat] Enable KV cache reuse and chunked prefill for mistral3.1 (#7628) 2025-09-17 08:11:16 -07:00
modeling_mixtral.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_mllama.py feat : support duplicate_kv_weight for qwen3 blockwise scale (#5459) 2025-06-30 11:49:22 +08:00
modeling_multimodal_encoder.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
modeling_multimodal_utils.py [TRTLLM-7918][feat] Support kvcache reuse and chunk prefill for phi4mm (#7723) 2025-09-18 17:37:16 +08:00
modeling_nanov2vlm.py [TRTLLM-6577][feat] Support nano_v2_vlm in pytorch backend (#7207) 2025-09-18 16:26:20 +08:00
modeling_nemotron_h.py [None][fix] enable NvFP4/FP8 quantization for Nemotron-H architecture (#7589) 2025-09-09 11:42:22 +03:00
modeling_nemotron_nas.py [Perf]: Add residual, norm for nemotron_nas models (#6455) 2025-07-30 09:10:38 -07:00
modeling_nemotron.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_phi3.py [None][fix] Accommodate Phi3/4 to work with ModelOpt's FP8 ckpts in Torch (#6761) 2025-08-19 09:22:46 -07:00
modeling_phi4mm.py [TRTLLM-7918][feat] Support kvcache reuse and chunk prefill for phi4mm (#7723) 2025-09-18 17:37:16 +08:00
modeling_pixtral.py [TRTLLM-7442][model] Remove unnecessary D2H copies (#7273) 2025-09-03 23:14:20 -04:00
modeling_qwen2vl.py [TRTLLM-6903][feat] Support chunked prefill for multimodal models (#6843) 2025-09-14 20:10:10 -07:00
modeling_qwen3_moe.py [None][feat] Support EPLB in Qwen3 MoE (#7443) 2025-09-19 16:45:35 +08:00
modeling_qwen3.py [None][fix] Revert "Revert "[None][feat] support attention dp for qwen3 dense model"" (#7780) 2025-09-18 20:11:05 +08:00
modeling_qwen_moe.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_qwen.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_radio.py [TRTLLM-6577][feat] Support nano_v2_vlm in pytorch backend (#7207) 2025-09-18 16:26:20 +08:00
modeling_siglip.py feat: Update Gemma3 Vision Encoder (#5973) 2025-07-14 22:38:10 +08:00
modeling_speculative.py [TRTLLM-6746][feat] Enable two-model spec dec for MTP Eagle (#7001) 2025-09-18 12:05:36 -04:00
modeling_utils.py [https://nvbugs/5519544][fix] fix invalid expression for disabling pa… (#7806) 2025-09-18 12:54:52 +08:00
modeling_vila.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00