TensorRT-LLM/tensorrt_llm/_torch/models
William Zhang a6a88985cf
[TRTLLM-9409][feat] Pass MRoPE tensors for EPD disagg (#9758)
* Why?

Certain VLMs, such as the Qwen family, need more than just the multimodal
embeddings in the language model; they also need MRoPE position IDs and
deltas. Prior to this commit, only the embeddings could be communicated
from the encoder worker to the prefill worker.

* What?

This commit extends `DisaggregatedParams` to include the MRoPE
information. It also adjusts the code paths required to communicate that
information between the encoder (E), prefill (P), and decode (D) workers.

Closes TRTLLM-9409.
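The change described above can be sketched as a minimal dataclass. This is an illustrative model only: the field names `multimodal_embeddings`, `mrope_position_ids`, and `mrope_position_deltas`, and the `has_mrope` helper, are assumptions for the sketch, and plain lists stand in for tensors; the actual `DisaggregatedParams` in TensorRT-LLM differs.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DisaggregatedParams:
    """Sketch of the parameters handed from the encoder (E) worker to the
    prefill (P) and decode (D) workers. Field names are illustrative."""

    # Multimodal embeddings were already communicated before this change.
    multimodal_embeddings: Optional[List[List[float]]] = None
    # New in this commit: MRoPE tensors needed by Qwen-style VLMs.
    mrope_position_ids: Optional[List[int]] = None
    mrope_position_deltas: Optional[List[int]] = None

    def has_mrope(self) -> bool:
        # Workers that do not serve MRoPE-dependent models simply leave
        # these fields unset, keeping the old embeddings-only behavior.
        return self.mrope_position_ids is not None
```

Under this sketch, the encoder worker populates the MRoPE fields alongside the embeddings, and prefill/decode workers check `has_mrope()` before applying the position IDs and deltas.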

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2025-12-22 06:32:49 -05:00
checkpoints [None][feat] Support Eagle3 on Mistral Large3 (#9971) 2025-12-21 10:25:45 -05:00
__init__.py [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) 2025-12-15 20:05:20 -08:00
modeling_auto.py [None][feat] Fused kernels (qknormrope + moe routing) and two-model MTP support for glm4moe (#9852) 2025-12-14 10:47:24 +08:00
modeling_bert.py [None][chore] replace print_colored_debug with logger_debug (#8417) 2025-10-22 17:54:38 +08:00
modeling_clip.py [None][feat] Support kv_cache_reuse for HyperCLOVAX-Vision model (#7789) 2025-10-21 11:11:24 +09:00
modeling_deepseekv3.py [None][feat] Support Eagle3 on Mistral Large3 (#9971) 2025-12-21 10:25:45 -05:00
modeling_exaone4.py [https://nvbugs/5569713][fix] Disable fp8 deep gemm for EXAONE-4.0-32B-FP8 (#8429) 2025-11-20 12:43:13 -05:00
modeling_gemma3.py [#6186][feat] Introduce QKNormRoPEAttention module (#6830) 2025-09-05 14:04:41 +02:00
modeling_gemma3vl.py [None][fix] Multimodal InputProcessor dummy builder fix (#8916) 2025-11-19 22:32:21 -08:00
modeling_glm.py [TRTLLM-9992][perf] Enable PDL for CuteDSL kernels and overlap MoeOutputMemset (#10043) 2025-12-20 03:12:41 -05:00
modeling_gpt_oss.py [https://nvbugs/5552132][fix] Enable LoRa for GPT OSS Torch (#8253) 2025-12-03 15:42:15 +01:00
modeling_hunyuan_dense.py [None][feat] Add Tencent HunYuanDenseV1 model support (#7081) 2025-09-23 09:27:29 +08:00
modeling_hunyuan_moe.py [TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (#9486) 2025-12-01 08:37:07 +08:00
modeling_hyperclovax.py [None][fix] Multimodal InputProcessor dummy builder fix (#8916) 2025-11-19 22:32:21 -08:00
modeling_llama_min_latency.py [TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (#9224) 2025-11-26 10:59:06 +08:00
modeling_llama.py [None][feat] spark cublas LUT table for llama-8b-bf16 perf (#9811) 2025-12-12 22:37:56 -05:00
modeling_llava_next.py [TRTLLM-9409][feat] Pass MRoPE tensors for EPD disagg (#9758) 2025-12-22 06:32:49 -05:00
modeling_mistral_large3.py [None][feat] Support Mistral Large3 LLM part (#9820) 2025-12-13 11:44:27 +08:00
modeling_mistral.py [None][feat] Support Eagle3 on Mistral Large3 (#9971) 2025-12-21 10:25:45 -05:00
modeling_mixtral.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_mllama.py feat : support duplicate_kv_weight for qwen3 blockwise scale (#5459) 2025-06-30 11:49:22 +08:00
modeling_multimodal_encoder.py [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) 2025-10-15 17:09:30 +09:00
modeling_multimodal_utils.py [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) 2025-12-15 20:05:20 -08:00
modeling_nemotron_h.py [FMDL-1328][feat] Add support for nano-v3 and super-v3 with pytorch backend (#9261) 2025-12-02 13:40:20 +08:00
modeling_nemotron_nano.py [None][fix] Multimodal InputProcessor dummy builder fix (#8916) 2025-11-19 22:32:21 -08:00
modeling_nemotron_nas.py [None][feat] add specdec to nemotron nas (#8985) 2025-11-19 19:28:35 +01:00
modeling_nemotron.py feat: Remove not used padding_idx in models (#5385) 2025-06-25 17:19:59 +08:00
modeling_phi3.py [https://nvbugs/5540752][fix] Support quantized Phi4 MM models (#8190) 2025-10-20 06:36:09 -04:00
modeling_phi4mm.py [None][fix] Multimodal InputProcessor dummy builder fix (#8916) 2025-11-19 22:32:21 -08:00
modeling_pixtral.py [TRTLLM-7442][model] Remove unnecessary D2H copies (#7273) 2025-09-03 23:14:20 -04:00
modeling_qwen2vl.py [TRTLLM-9409][feat] Pass MRoPE tensors for EPD disagg (#9758) 2025-12-22 06:32:49 -05:00
modeling_qwen3_moe.py [TRTLLM-9992][perf] Enable PDL for CuteDSL kernels and overlap MoeOutputMemset (#10043) 2025-12-20 03:12:41 -05:00
modeling_qwen3_next.py [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) 2025-12-15 20:05:20 -08:00
modeling_qwen3.py [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) 2025-12-15 20:05:20 -08:00
modeling_qwen3vl_moe.py [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) 2025-12-15 20:05:20 -08:00
modeling_qwen3vl.py [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) 2025-12-15 20:05:20 -08:00
modeling_qwen_moe.py [TRTLLM-7408][feat] Wrap MOE with custom op. (#7277) 2025-09-09 12:18:56 -04:00
modeling_qwen.py [None][feat] Support Yarn on QwQ-32B model (#9059) 2025-11-25 07:27:28 +08:00
modeling_radio.py [TRTLLM-8579][feat] Support quantized model for nano-v2-vlm (#8304) 2025-10-16 09:44:11 +08:00
modeling_seedoss.py [None][feat] Support Seed-OSS model in pytorch backend (#7496) 2025-09-24 03:57:12 -07:00
modeling_siglip.py [None][feat] Support kv_cache_reuse for HyperCLOVAX-Vision model (#7789) 2025-10-21 11:11:24 +09:00
modeling_speculative.py [None][feat] Support Eagle3 on Mistral Large3 (#9971) 2025-12-21 10:25:45 -05:00
modeling_starcoder2.py [TRTLLM-7967][feat] Adding Starcoder2 PyTorch Backend Support (#8923) 2025-11-24 11:23:22 -08:00
modeling_utils.py [TRTLLM-8310][feat] Add Qwen3-VL-MoE (#9689) 2025-12-15 20:05:20 -08:00
modeling_vila.py [None][fix] Multimodal InputProcessor dummy builder fix (#8916) 2025-11-19 22:32:21 -08:00