Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-02-18 16:55:08 +08:00
feat: enhance trtllm serve multimodal

* Made `load_image` and `load_video` asynchronous.
* Added `image_encoded` input support for compatibility with genai-perf.
* Added text-only support on multimodal models (currently Qwen2-VL & Qwen2.5-VL).
* Follow-up commits: add test, fix bandit findings, trim utils, trim for test, fix genai-perf commands, refactor chat_utils, add stress-test genai-perf command.

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
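The first bullet describes moving blocking media loading off the serving event loop. As a minimal illustrative sketch (the names `load_image_async` and `_decode_image_bytes` are hypothetical, not the actual TensorRT-LLM API), a blocking decode step can be made asynchronous with `asyncio.to_thread` so the server stays responsive while many requests are in flight:

```python
import asyncio
import base64


def _decode_image_bytes(data_b64: str) -> bytes:
    """Blocking decode step (stand-in for real image decoding, e.g. PIL)."""
    return base64.b64decode(data_b64)


async def load_image_async(data_b64: str) -> bytes:
    # Run the blocking decode in a worker thread so the event loop
    # can keep serving other requests concurrently.
    return await asyncio.to_thread(_decode_image_bytes, data_b64)


async def main() -> None:
    payload = base64.b64encode(b"fake-image-bytes").decode()
    img = await load_image_async(payload)
    print(len(img))


if __name__ == "__main__":
    asyncio.run(main())
```

This also matches the `image_encoded` bullet in spirit: benchmark tools such as genai-perf typically send images as base64-encoded payloads rather than URLs, so the server needs a decode path for inline image data.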
| Name |
|---|
| apps |
| __init__.py |
| _run_mpi_comm_task.py |
| fake.sh |
| grid_searcher.py |
| run_llm_exit.py |
| run_llm_with_postproc.py |
| run_llm.py |
| test_build_cache.py |
| test_executor.py |
| test_llm_args.py |
| test_llm_download.py |
| test_llm_kv_cache_events.py |
| test_llm_models.py |
| test_llm_multi_gpu_pytorch.py |
| test_llm_multi_gpu.py |
| test_llm_perf_evaluator.py |
| test_llm_pytorch.py |
| test_llm_quant.py |
| test_llm_utils.py |
| test_llm.py |
| test_mpi_session.py |
| test_reasoning_parser.py |