TensorRT-LLMs/tensorrt_llm/inputs/__init__.py
rakib-hasan ff3b741045
feat: adding multimodal (only image for now) support in trtllm-bench (#3490)
* feat: adding multimodal (only image for now) support in trtllm-bench

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* fix: add  in load_dataset() calls to maintain the v2.19.2 behavior

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* re-adding prompt_token_ids and using that for prompt_len

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* updating the datasets version in examples as well

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* api changes are not needed

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* moving datasets requirement and removing a missed api change

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* addressing review comments

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* refactoring the quickstart example

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

---------

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>
2025-04-18 07:06:16 +08:00


from .data import PromptInputs, TextPrompt, TokensPrompt, prompt_inputs
from .registry import (ExtraProcessedInputs, InputProcessor,
                       create_input_processor, register_input_processor)
from .utils import (INPUT_FORMATTER_MAP, default_image_loader,
                    default_video_loader, format_llava_next_input,
                    format_qwen2_vl_input, format_vila_input, load_image,
                    load_video)

__all__ = [
    "PromptInputs", "prompt_inputs", "TextPrompt", "TokensPrompt",
    "InputProcessor", "create_input_processor", "register_input_processor",
    "ExtraProcessedInputs", "load_image", "load_video", "INPUT_FORMATTER_MAP",
    "default_image_loader", "default_video_loader", "format_vila_input",
    "format_llava_next_input", "format_qwen2_vl_input"
]
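The module's public surface centers on the prompt types re-exported from `.data`: `prompt_inputs` normalizes whatever the caller passes (raw text, pre-tokenized ids, or an already-built prompt dict) into a `TextPrompt` or `TokensPrompt`. The sketch below illustrates that normalization pattern in a self-contained form; the field names (`prompt`, `prompt_token_ids`, `multi_modal_data`) and the exact behavior are assumptions for illustration, not the actual `tensorrt_llm.inputs.data` implementation.

```python
# Minimal, self-contained sketch of the prompt-normalization pattern behind
# PromptInputs / TextPrompt / TokensPrompt / prompt_inputs. Field names here
# are assumed for illustration and may differ from tensorrt_llm's real types.
from typing import List, TypedDict, Union


class TextPrompt(TypedDict, total=False):
    prompt: str                  # raw text to be tokenized downstream
    multi_modal_data: dict       # e.g. {"image": [...]} for multimodal runs


class TokensPrompt(TypedDict, total=False):
    prompt_token_ids: List[int]  # pre-tokenized input, skips tokenization


PromptInputs = Union[str, List[int], TextPrompt, TokensPrompt]


def prompt_inputs(inputs: PromptInputs) -> Union[TextPrompt, TokensPrompt]:
    """Normalize a raw string or token-id list into a prompt dict."""
    if isinstance(inputs, str):
        return TextPrompt(prompt=inputs)
    if isinstance(inputs, list):
        return TokensPrompt(prompt_token_ids=inputs)
    return inputs  # already a TextPrompt / TokensPrompt dict
```

A benchmark harness can then accept heterogeneous dataset rows and hand every entry through one code path, which is the kind of uniformity the multimodal trtllm-bench change in this commit relies on.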