TensorRT-LLM/tensorrt_llm/_torch/pyexecutor
Kate Cheng 7dbe618683
feat: Add multimodal embedding field in LlmRequest (#3855)
* Add a new param to LlmRequest and Request to natively support multimodal (mm) inputs

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>

* update comment

* Update tests to match the new LlmRequest constructor parameters

* Modify unit tests and mm_embedding's dict name in llama4

* Fix based on comments

* Fix comment

* Fix LlmRequest initialization in kvCacheManagerTest

* Clean up code for prompt_tuning_config

* Clean up prompt_tuning_config in GenerationRequest

---------

Signed-off-by: Kate Cheng <yunhsuanc@nvidia.com>
Co-authored-by: Haohang Huang <31998628+symphonylyh@users.noreply.github.com>
2025-05-01 12:23:30 +08:00
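The commit above adds an optional multimodal embedding field to LlmRequest so that precomputed embeddings (e.g. image features from a vision encoder) can travel with the request instead of through the prompt-tuning config. A minimal sketch of that idea, with all names (`LlmRequestSketch`, `multimodal_embedding`, `is_multimodal`) purely illustrative and not the actual TensorRT-LLM API:

```python
# Hypothetical sketch only -- field and class names are assumptions,
# not the real tensorrt_llm.LlmRequest interface.
from dataclasses import dataclass
from typing import List, Optional, Sequence

@dataclass
class LlmRequestSketch:
    request_id: int
    input_token_ids: List[int]
    # New optional field: precomputed multimodal embeddings, e.g. image
    # features from a vision encoder. None for text-only requests.
    multimodal_embedding: Optional[Sequence[Sequence[float]]] = None

    def is_multimodal(self) -> bool:
        # A request is multimodal iff it carries embeddings.
        return self.multimodal_embedding is not None

# A text-only request, and one carrying two 4-dim image embeddings.
text_req = LlmRequestSketch(request_id=0, input_token_ids=[1, 2, 3])
mm_req = LlmRequestSketch(
    request_id=1,
    input_token_ids=[1, 2, 3],
    multimodal_embedding=[[0.1] * 4, [0.2] * 4],
)
```

Keeping the embeddings as a first-class request field (rather than overloading the prompt-tuning config, as the later "Clean up prompt_tuning_config" bullets suggest) lets the executor and model engine check for multimodal inputs directly on the request.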
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
_util.py fix: get head_dim from model’s config. (#3916) 2025-04-29 23:04:29 +08:00
config.py fix: Fix C++ decoder synchronization in PyTorch (#3106) 2025-04-23 23:55:27 +08:00
cuda_graph_runner.py Support CUDA graphs for EAGLE3 (#3176) 2025-04-17 04:53:50 +08:00
decoder.py feat: return logits in PyTorch flow (#3221) 2025-04-24 16:56:03 -07:00
guided_decoder.py fix: Fix C++ decoder synchronization in PyTorch (#3106) 2025-04-23 23:55:27 +08:00
kv_cache_transceiver.py cacheTransceiver buffer manager (#3798) 2025-04-27 11:48:15 +08:00
layerwise_nvtx_marker.py Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
llm_request.py feat: Add multimodal embedding field in LlmRequest (#3855) 2025-05-01 12:23:30 +08:00
model_engine.py feat: Add multimodal embedding field in LlmRequest (#3855) 2025-05-01 12:23:30 +08:00
py_executor_creator.py Add running E2E LoRA flow (#3648) 2025-04-23 11:19:41 +08:00
py_executor.py [fix] Pad requests to maximum draft length in spec decode (#3957) 2025-04-30 11:02:18 -04:00
resource_manager.py chore: remove DummyKvCacheManager. (#3896) 2025-04-29 09:59:37 +08:00
scheduler.py refactor: collect executor and decoder states into dataclass (#3234) 2025-04-15 16:31:45 +08:00
seq_slot_manager.py fix: Fix C++ decoder synchronization in PyTorch (#3106) 2025-04-23 23:55:27 +08:00