TensorRT-LLM/tensorrt_llm/_torch/models
dominicshanshan c9dca69e1b
[None][chore] Mass integration of release/1.0 - 3rd (#7519)
Signed-off-by: Nave Assaf <nassaf@nvidia.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
Signed-off-by: qqiao <qqiao@nvidia.com>
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Signed-off-by: Yifei Zhang <219273404+yifeizhang-c@users.noreply.github.com>
Signed-off-by: Amit Zuker <203509407+amitz-nv@users.noreply.github.com>
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Signed-off-by: Pamela <179191831+pamelap-nvidia@users.noreply.github.com>
Signed-off-by: Hui Gao <huig@nvidia.com>
Signed-off-by: Alexandre Milesi <30204471+milesial@users.noreply.github.com>
Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
Signed-off-by: Michal Guzek <mguzek@nvidia.com>
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
Signed-off-by: ruodil <200874449+ruodil@users.noreply.github.com>
Signed-off-by: Linda-Stadter <57756729+Linda-Stadter@users.noreply.github.com>
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>
Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
Co-authored-by: Nave Assaf <55059536+Naveassaf@users.noreply.github.com>
Co-authored-by: Yechan Kim <161688079+yechank-nvidia@users.noreply.github.com>
Co-authored-by: brb-nv <169953907+brb-nv@users.noreply.github.com>
Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
Co-authored-by: Emma Qiao <qqiao@nvidia.com>
Co-authored-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Co-authored-by: Bo Deng <deemod@nvidia.com>
Co-authored-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Co-authored-by: yifeizhang-c <219273404+yifeizhang-c@users.noreply.github.com>
Co-authored-by: amitz-nv <203509407+amitz-nv@users.noreply.github.com>
Co-authored-by: Erin <14718778+hchings@users.noreply.github.com>
Co-authored-by: chenfeiz0326 <chenfeiz@nvidia.com>
Co-authored-by: ChristinaZ <83400082+ChristinaZ@users.noreply.github.com>
Co-authored-by: Venky <23023424+venkywonka@users.noreply.github.com>
Co-authored-by: Pamela Peng <179191831+pamelap-nvidia@users.noreply.github.com>
Co-authored-by: HuiGao-NV <huig@nvidia.com>
Co-authored-by: milesial <milesial@users.noreply.github.com>
Co-authored-by: Shi Xiaowei <39303645+Shixiaowei02@users.noreply.github.com>
Co-authored-by: Michal Guzek <moraxu@users.noreply.github.com>
Co-authored-by: peaceh-nv <103117813+peaceh-nv@users.noreply.github.com>
Co-authored-by: Guoming Zhang <137257613+nv-guomingz@users.noreply.github.com>
Co-authored-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
Co-authored-by: pcastonguay <55748270+pcastonguay@users.noreply.github.com>
Co-authored-by: ruodil <200874449+ruodil@users.noreply.github.com>
Co-authored-by: Linda <57756729+Linda-Stadter@users.noreply.github.com>
Co-authored-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com>
Co-authored-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
Co-authored-by: Jiagan Cheng <jiaganc@nvidia.com>
Co-authored-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
Co-authored-by: Larry <197874197+LarryXFly@users.noreply.github.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
Co-authored-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
2025-09-08 14:03:04 +08:00
checkpoints [TRTLLM-6835][fix] Fix potential hang caused by python multiprocessing when prefetching weights (#6927) 2025-09-01 11:02:31 +08:00
__init__.py [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) 2025-08-15 06:56:44 +08:00
modeling_auto.py [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) 2025-08-19 21:42:50 -07:00
modeling_bert.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_clip.py [feat] Enable TP and batching for PixtralVisionModel / Mistral3VLM (#6152) 2025-07-22 11:06:41 -07:00
modeling_deepseekv3.py [None][fix] DeepSeek-R1 W4A8 weight loading issue; fixes regression from #6200 (#7123) 2025-09-07 00:04:56 +08:00
modeling_exaone4.py [#6186][feat] Introduce QKNormRoPEAttention module (#6830) 2025-09-05 14:04:41 +02:00
modeling_gemma3.py [#6186][feat] Introduce QKNormRoPEAttention module (#6830) 2025-09-05 14:04:41 +02:00
modeling_gemma3vl.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_gpt_oss.py [None][fix] Fix perfect router. (#6797) 2025-08-14 20:09:08 -07:00
modeling_hunyuan_moe.py [None][fix] fix hunyuan_moe init bug (#7502) 2025-09-04 03:06:00 -04:00
modeling_hyperclovax.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_llama_min_latency.py [Model load] Fix llama min-latency model load (#5883) 2025-07-15 09:29:19 +08:00
modeling_llama.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_llava_next.py [None][chore] Mass integration of release/1.0 - 3rd (#7519) 2025-09-08 14:03:04 +08:00
modeling_mistral.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_mixtral.py [None][perf] Improve the performance of online EPLB on Hopper by better overlapping (#6624) 2025-08-12 09:25:13 +08:00
modeling_mllama.py feat : support duplicate_kv_weight for qwen3 blockwise scale (#5459) 2025-06-30 11:49:22 +08:00
modeling_multimodal_encoder.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
modeling_multimodal_utils.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_nemotron_h.py [TRTLLM-4921][feat] Enable chunked prefill for Nemotron-H (#6334) 2025-08-22 12:15:20 -04:00
modeling_nemotron_nas.py [Perf]: Add residual, norm for nemotron_nas models (#6455) 2025-07-30 09:10:38 -07:00
modeling_nemotron.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_phi3.py [None][fix] Accommodate Phi3/4 to work with ModelOpt's FP8 ckpts in Torch (#6761) 2025-08-19 09:22:46 -07:00
modeling_phi4mm.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_pixtral.py [TRTLLM-7442][model] Remove unnecessary D2H copies (#7273) 2025-09-03 23:14:20 -04:00
modeling_qwen2vl.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00
modeling_qwen3_moe.py [None][perf] Improve the performance of online EPLB on Hopper by better overlapping (#6624) 2025-08-12 09:25:13 +08:00
modeling_qwen3.py [None][chore] Mass integration of release/1.0 - 3rd (#7519) 2025-09-08 14:03:04 +08:00
modeling_qwen_moe.py [None][perf] Improve the performance of online EPLB on Hopper by better overlapping (#6624) 2025-08-12 09:25:13 +08:00
modeling_qwen.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_siglip.py feat: Update Gemma3 Vision Encoder (#5973) 2025-07-14 22:38:10 +08:00
modeling_speculative.py [TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481) 2025-09-04 23:30:14 +08:00
modeling_utils.py [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) 2025-08-19 21:42:50 -07:00
modeling_vila.py [TRTLLM-7440][fix] Split fused_input_embed to separate out host sync (#7280) 2025-09-06 23:11:39 -04:00