TensorRT-LLM/tensorrt_llm/_torch/models
qixiang-99 bf4f7ad744
feat: add PyTorch support of Vision Encoder for multimodal models (#3791)
* feat: Add rename_weights_with_regex function for dynamic weight key renaming

Introduced a new utility function to rename weight keys in a dictionary based on regex pattern matching. This allows for flexible mapping of keys from Hugging Face naming conventions to TRT-LLM naming conventions, enhancing model compatibility and usability.
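
A minimal sketch of what such a utility can look like; the real function's signature and pattern table may differ:

```python
import re
from typing import Dict

import torch


def rename_weights_with_regex(pattern_mapping: Dict[str, str],
                              weights: Dict[str, torch.Tensor]
                              ) -> Dict[str, torch.Tensor]:
    """Rename weight keys by regex: the first matching pattern wins;
    keys that match nothing are kept unchanged."""
    renamed = {}
    for key, tensor in weights.items():
        for pattern, replacement in pattern_mapping.items():
            new_key, n_subs = re.subn(pattern, replacement, key)
            if n_subs > 0:
                key = new_key
                break
        renamed[key] = tensor
    return renamed


# Map an HF-style key onto a TRT-LLM-style key (illustrative patterns).
weights = {"vision_model.encoder.layers.0.self_attn.out_proj.weight": torch.empty(4, 4)}
mapping = {r"self_attn\.out_proj": "self_attn.o_proj"}
print(list(rename_weights_with_regex(mapping, weights)))
# ['vision_model.encoder.layers.0.self_attn.o_proj.weight']
```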

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* feat: Implement SiglipVisionModel and related components

Added the SiglipVisionModel along with its associated classes, including SiglipAttention, SiglipEncoderLayer, and SiglipEncoder.
Additionally, a new test suite for the SiglipVisionModel has been created to ensure compatibility with Hugging Face outputs.

SiglipVisionModel currently supports batch sizes larger than one, and its input and output shapes match Hugging Face's for compatibility.
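
A hedged sketch of the parity check the test suite performs, using Hugging Face's SiglipVisionModel as the reference; the TRT-LLM side is left as a comment because its exact constructor and forward signatures are not shown here:

```python
import torch
from transformers import SiglipVisionConfig
from transformers import SiglipVisionModel as HFSiglipVisionModel

# A deliberately tiny config so the check runs quickly.
config = SiglipVisionConfig(hidden_size=64, intermediate_size=128,
                            num_hidden_layers=2, num_attention_heads=4,
                            image_size=32, patch_size=16)
hf_model = HFSiglipVisionModel(config).eval()

# Batched input, HF layout: [batch, channels, height, width].
pixel_values = torch.randn(2, 3, 32, 32)
with torch.no_grad():
    ref = hf_model(pixel_values).last_hidden_state  # [batch, patches, hidden]

# Hypothetical TRT-LLM side; real names and signatures may differ:
# out = trtllm_siglip(pixel_values, attn_metadata=metadata)
# torch.testing.assert_close(out, ref, rtol=1e-2, atol=1e-2)
print(ref.shape)  # torch.Size([2, 4, 64]): four 16x16 patches per 32x32 image
```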

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* feat: Add CLIPVisionModel and associated components

Introduced the CLIPVisionModel along with its related classes, including CLIPAttention, CLIPEncoderLayer, CLIPEncoder, and CLIPVisionTransformer. This implementation aligns with Hugging Face's CLIP architecture, ensuring compatibility in input and output shapes.
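
The layering mirrors Hugging Face's CLIP vision stack. A minimal pure-PyTorch skeleton of that hierarchy, assuming standard attention (the TRT-LLM version wraps its own fused attention op instead of nn.MultiheadAttention):

```python
import torch
from torch import nn


class CLIPAttention(nn.Module):
    """Stand-in for the TRT-LLM attention op; plain multi-head self-attention."""

    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x, need_weights=False)
        return out


class CLIPEncoderLayer(nn.Module):
    """Pre-norm transformer block in the HF CLIPEncoderLayer layout."""

    def __init__(self, hidden_size: int, num_heads: int, intermediate_size: int):
        super().__init__()
        self.layer_norm1 = nn.LayerNorm(hidden_size)
        self.self_attn = CLIPAttention(hidden_size, num_heads)
        self.layer_norm2 = nn.LayerNorm(hidden_size)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, intermediate_size),
            nn.GELU(),  # HF CLIP defaults to quick_gelu; plain GELU for brevity
            nn.Linear(intermediate_size, hidden_size),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.self_attn(self.layer_norm1(x))
        return x + self.mlp(self.layer_norm2(x))


class CLIPEncoder(nn.Module):
    """A stack of encoder layers; CLIPVisionTransformer adds the patch
    embeddings and pre/post layer norms around this stack."""

    def __init__(self, num_layers: int, hidden_size: int, num_heads: int,
                 intermediate_size: int):
        super().__init__()
        self.layers = nn.ModuleList(
            CLIPEncoderLayer(hidden_size, num_heads, intermediate_size)
            for _ in range(num_layers))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x


enc = CLIPEncoder(num_layers=2, hidden_size=64, num_heads=4, intermediate_size=128)
print(enc(torch.randn(2, 5, 64)).shape)  # torch.Size([2, 5, 64])
```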

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* feat: Enhance CLIPVisionModel with attention metadata preparation and unit tests

Updated the CLIPVisionModel to include a method for preparing attention metadata, simplifying the model's usage. Additionally, added a comprehensive unit test suite for the CLIPVisionModel, ensuring compatibility with Hugging Face outputs and validating model performance across various scenarios.
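
Roughly how the simplified call pattern looks; `prepare_attn_metadata` is named in the spirit of the commit, and the stub below only imitates the shape of the real API:

```python
import torch


class VisionModelStub:
    """Stub imitating the call pattern; not the real CLIPVisionModel."""

    def prepare_attn_metadata(self, batch_size: int):
        # The real metadata describes sequence lengths and cache layout for
        # TRT-LLM's fused attention kernels; a dict stands in for it here.
        num_patches = (336 // 14) ** 2 + 1  # ViT-L/14-336 patches plus CLS
        return {"seq_lens": torch.full((batch_size,), num_patches)}

    def __call__(self, pixel_values, attn_metadata):
        batch = pixel_values.shape[0]
        return torch.zeros(batch, int(attn_metadata["seq_lens"][0]), 1024)


model = VisionModelStub()
pixel_values = torch.randn(4, 3, 336, 336)
# One call prepares everything the attention kernels need for this batch size;
# callers no longer assemble the metadata by hand.
metadata = model.prepare_attn_metadata(pixel_values.shape[0])
features = model(pixel_values, attn_metadata=metadata)
print(features.shape)  # torch.Size([4, 577, 1024])
```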

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* feat: Refactor SiglipVisionModel with attention metadata preparation and update unit tests

Enhanced the SiglipVisionModel by adding a method to prepare attention metadata, streamlining its usage. Updated unit tests to validate model performance and compatibility with Hugging Face outputs, including adjustments to the configuration and test scenarios.

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* refactor: Remove unused rotary_emb parameter from CLIP and Siglip attention classes

Eliminated the rotary_emb parameter from the CLIPAttention and SiglipAttention classes to streamline the code. Updated unit tests to reflect changes in the model configurations, including clarifications in the default configurations sourced from Hugging Face.

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* feat: Integrate CLIPVisionModel into LlavaNextInputProcessor and enhance weight loading

Added CLIPVisionModel to the LlavaNextInputProcessor for improved vision processing. Updated the model loading mechanism to ensure compatibility with the new vision model and added attention metadata preparation. Removed debug print statements from the weight-renaming function for cleaner code.
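
A hedged sketch of the vision path this wires up inside the input processor; apart from CLIPVisionModel itself, the class and method names below are illustrative:

```python
import torch
from torch import nn


class PatchEmbedStub(nn.Module):
    """Stand-in for the CLIP vision tower: images -> patch features."""

    def __init__(self, patch: int = 14, vision_dim: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(3, vision_dim, kernel_size=patch, stride=patch)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        feats = self.conv(pixel_values)          # [B, C, H/p, W/p]
        return feats.flatten(2).transpose(1, 2)  # [B, patches, C]


class InputProcessorSketch:
    """Vision path of the input processor as described above."""

    def __init__(self, vision_model: nn.Module, projector: nn.Module):
        self.vision_model = vision_model  # the TRT-LLM CLIPVisionModel in the real code
        self.projector = projector        # multimodal projector: vision dim -> LLM dim

    @torch.inference_mode()
    def encode_images(self, pixel_values: torch.Tensor) -> torch.Tensor:
        hidden = self.vision_model(pixel_values)  # [B, patches, vision_dim]
        return self.projector(hidden)             # [B, patches, llm_dim]


proc = InputProcessorSketch(PatchEmbedStub(), nn.Linear(32, 64))
print(proc.encode_images(torch.randn(2, 3, 224, 224)).shape)  # [2, 256, 64]
```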

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* refactor: Remove unused max_position_embeddings from CLIPAttention and update Siglip classes to use CLIP components

Removed the unused max_position_embeddings variable from the CLIPAttention class. Updated the Siglip classes to utilize CLIP components, specifically replacing SiglipEncoder and SiglipAttention with their CLIP counterparts, streamlining the codebase and enhancing consistency across models.
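
In sketch form, reusing the CLIPEncoder skeleton from the earlier block, the Siglip transformer becomes a thin wrapper over the shared CLIP blocks; the internals here are assumptions:

```python
from torch import nn

# Assumes the CLIPEncoder sketch defined in the earlier block.


class SiglipVisionTransformerSketch(nn.Module):
    """Siglip reusing the CLIP encoder stack; in this simplified view only the
    embeddings and the final norm differ between the two architectures."""

    def __init__(self, num_layers: int, hidden_size: int, num_heads: int,
                 intermediate_size: int):
        super().__init__()
        self.encoder = CLIPEncoder(num_layers, hidden_size, num_heads,
                                   intermediate_size)
        self.post_layernorm = nn.LayerNorm(hidden_size)

    def forward(self, embeddings):
        return self.post_layernorm(self.encoder(embeddings))
```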

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* refactor: Consolidate weight loading logic into a shared implementation

Refactored the weight loading process across CLIP and Siglip models by using a new utility function, _load_weights_impl, to streamline the loading mechanism. This change enhances code maintainability and reduces redundancy in weight handling, ensuring consistent behavior across different model architectures.
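
One plausible shape for the shared implementation; the real _load_weights_impl signature may differ, and the regex maps below are illustrative:

```python
import re
from typing import Dict

import torch
from torch import nn


def _load_weights_impl(model: nn.Module,
                       weights: Dict[str, torch.Tensor],
                       pattern_mapping: Dict[str, str]) -> None:
    """Rename HF-style keys with the model's regex map, then load."""
    renamed = {}
    for key, tensor in weights.items():
        for pattern, replacement in pattern_mapping.items():
            key = re.sub(pattern, replacement, key)
        renamed[key] = tensor
    model.load_state_dict(renamed, strict=False)


# Each architecture now only declares its own key map and delegates:
CLIP_KEY_MAPPING = {r"self_attn\.out_proj": "self_attn.o_proj"}    # illustrative
SIGLIP_KEY_MAPPING = {r"self_attn\.out_proj": "self_attn.o_proj"}  # illustrative


class CLIPVisionModelSketch(nn.Module):
    def load_weights(self, weights):
        _load_weights_impl(self, weights, CLIP_KEY_MAPPING)


class SiglipVisionModelSketch(nn.Module):
    def load_weights(self, weights):
        _load_weights_impl(self, weights, SIGLIP_KEY_MAPPING)
```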

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* refactor: Simplify output handling in CLIP and Siglip models by removing output_hidden_states parameter

Removed the output_hidden_states parameter from the CLIPEncoder and SiglipVisionTransformer classes, streamlining the output handling process. Updated the corresponding unit tests to reflect these changes and ensure compatibility with the new output structure.

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* feat: Enhance LlavaNextInputProcessor with dynamic model loading and memory optimization

Updated the LlavaNextInputProcessor to support dynamic model loading from local paths or Hugging Face, improving memory efficiency by partially loading the model components. Integrated the LlavaNextMultiModalProjector and adjusted weight loading to ensure compatibility with the new architecture.
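
A sketch of the memory-saving idea under stated assumptions: read only the vision-tower and projector tensors from a sharded HF checkpoint instead of materializing the full model. The vision_tower. and multi_modal_projector. prefixes follow HF's llava-next checkpoints; the TRT-LLM code may organize this differently:

```python
import json
import os

from safetensors import safe_open


def load_vision_weights(checkpoint_dir: str) -> dict:
    """Load only the tensors the vision path needs from a sharded checkpoint."""
    wanted = ("vision_tower.", "multi_modal_projector.")
    index_path = os.path.join(checkpoint_dir, "model.safetensors.index.json")
    with open(index_path) as f:
        weight_map = json.load(f)["weight_map"]
    # Visit only the shards that actually contain wanted tensors.
    shards = {fname for key, fname in weight_map.items() if key.startswith(wanted)}
    weights = {}
    for fname in shards:
        with safe_open(os.path.join(checkpoint_dir, fname), framework="pt") as f:
            for key in f.keys():
                if key.startswith(wanted):
                    weights[key] = f.get_tensor(key)  # reads just this tensor
    return weights
```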

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

---------

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>
Co-authored-by: Haohang Huang <31998628+symphonylyh@users.noreply.github.com>
2025-05-03 05:13:47 +08:00
..
__init__.py feat: add PyTorch support of Vision Encoder for multimodal models (#3791) 2025-05-03 05:13:47 +08:00
.gitkeep Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
modeling_auto.py Fix create_weights in attention (#3692) 2025-04-24 07:30:00 +08:00
modeling_bert.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
modeling_clip.py feat: add PyTorch support of Vision Encoder for multimodal models (#3791) 2025-05-03 05:13:47 +08:00
modeling_deepseekv3.py Clean up allreduce op in Deepseek V3 model. (#3829) 2025-05-01 07:56:36 +08:00
modeling_llama.py Llama4 processor fixes (#3994) 2025-05-01 12:45:53 +08:00
modeling_llava_next.py feat: add PyTorch support of Vision Encoder for multimodal models (#3791) 2025-05-03 05:13:47 +08:00
modeling_mamba_hybrid.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
modeling_mistral.py feat: Mistral-Large-2 support in the PyTorch workflow 2025-04-30 20:12:39 +08:00
modeling_mixtral.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
modeling_mllama.py feat: LogitsProcessor in PyTorch backend (#3145) 2025-05-01 14:15:30 -07:00
modeling_multimodal_encoder.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
modeling_multimodal_utils.py feat: llama4 input processor (#3383) 2025-04-25 16:47:14 -07:00
modeling_nemotron_h.py Support NemotronH FP8 Quantization 2025-04-29 18:51:43 +03:00
modeling_nemotron_nas.py Fix create_weights in attention (#3692) 2025-04-24 07:30:00 +08:00
modeling_nemotron.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
modeling_qwen2vl.py feat: Add multimodal embedding field in LlmRequest (#3855) 2025-05-01 12:23:30 +08:00
modeling_qwen3_moe.py model: support Qwen3 (#4010) 2025-05-01 23:12:41 +08:00
modeling_qwen3.py model: support Qwen3 (#4010) 2025-05-01 23:12:41 +08:00
modeling_qwen_moe.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
modeling_qwen.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
modeling_siglip.py feat: add PyTorch support of Vision Encoder for multimodal models (#3791) 2025-05-03 05:13:47 +08:00
modeling_utils.py feat: add PyTorch support of Vision Encoder for multimodal models (#3791) 2025-05-03 05:13:47 +08:00
modeling_vila.py feat: Add multimodal embedding field in LlmRequest (#3855) 2025-05-01 12:23:30 +08:00